Install Dash Enterprise on Google Kubernetes Engine (Multi-node) - Airgapped

This guide can help you if you are a new Dash Enterprise customer looking to start with a Dash Enterprise 5 installation, or if you are upgrading from Dash Enterprise 4.X.

About the Installation

Dash Enterprise 5 runs on Kubernetes, an open-source system that automates application lifecycles. Several managed services allow you to get started with Kubernetes-based software quickly. In this guide, you’ll learn how to install Dash Enterprise on Google Kubernetes Engine (GKE), available with a Google Cloud Platform (GCP) account.

In GKE, you work with a GKE cluster, which consists of multiple worker machines that are also called nodes. You host your GKE cluster inside your Virtual Private Cloud (VPC).


<img>

Installing Dash Enterprise is an automated process. You use a bootstrap node to run Plotly-provided scripts that provision and prepare the infrastructure. A bootstrap node is a virtual machine (VM) whose only purpose is to run the scripts. After Dash Enterprise is installed, you can decommission it. Using fresh VMs is the best practice because the scripts are unlikely to run into errors caused by other installed software. This guide describes how to use GCP’s Compute Engine service to provision the bootstrap node (please reach out to our Customer Success team if you’re unable to use Compute Engine).

As part of the automated infrastructure provisioning, the private GKE cluster is provisioned inside your VPC network with the Google Cloud CLI gcloud container clusters create command. A network load balancer is created as the main point of entry for all traffic directed towards Dash Enterprise.

Remember that these resources count towards your GCP quotas. Review your billing and quotas if necessary.

The cluster nodes belong to a single availability zone (AZ), and the scripts expect a single subnet. The Dash Enterprise core system isn’t currently configured for high availability; however, as long as the core system is available, Dash app developers can take advantage of features like app replicas to increase the availability of deployed apps. High availability for the core system may be supported in the future.

You’ll be installing Dash Enterprise as the single tenant on the GKE cluster—that is, no other software is installed on the cluster (except mandatory supporting software). Single-tenancy is well-suited for Dash Enterprise because it is a complex platform: Dash Enterprise interacts with the Kubernetes API to organize resources on the fly when developers perform tasks like deploying Dash apps and creating databases. Multi-tenancy is not currently supported.

Plotly uses Replicated to package and deliver Dash Enterprise. You’ll be interacting with the KOTS Admin Console, part of the Replicated toolset, as part of the installation and configuration. After the installation, you’ll continue to use the KOTS Admin Console for system administration such as performing Dash Enterprise upgrades.

Before You Install

In order for Dash app developers to use an airgapped Dash Enterprise instance, their apps need to fetch Python package dependencies from an internal index. (If there is no internal index available, developers need to place Python packages individually in their app’s files, which is not recommended for apps that require many packages because it involves additionally managing those packages’ dependencies).

Before committing to an airgapped Dash Enterprise installation, make sure your organization can provide an internal index. Dash Enterprise requires that the index have a TLS/SSL certificate from a globally trusted certificate authority (CA).
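For illustration only, a pip-style index is commonly consumed through a configuration file like the one below. The URL is a placeholder for your organization's internal index, and Dash Enterprise's own index setting is configured during installation rather than per-developer:

```ini
# Hypothetical pip configuration pointing at an internal package index.
# Replace the placeholder URL with your organization's index.
[global]
index-url = https://pypi.internal.example.com/simple
```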

A common strategy is to create a mirror of pypi.org. If your organization is instead building its own custom index, here are the Python packages we recommend making available (note that the version numbers were obtained via pip freeze in May 2023):


alembic==1.10.4
amqp==5.1.1
ansi2html==1.8.0
anyio==3.6.2
aplus==0.11.0
argon2-cffi==21.3.0
argon2-cffi-bindings==21.2.0
arrow==1.2.3
asn1crypto==1.5.1
astor==0.8.1
asttokens==2.2.1
attrs==23.1.0
autograd==1.5
autograd-gamma==0.5.0
backcall==0.2.0
beautifulsoup4==4.12.2
billiard==3.6.4.0
blake3==0.3.3
bleach==6.0.0
blinker==1.6.2
boto3==1.26.129
botocore==1.29.129
Brotli==1.0.9
cachetools==5.3.0
celery==5.2.7
certifi==2023.5.7
cffi==1.15.1
chardet==5.1.0
charset-normalizer==3.1.0
click==8.1.3
click-didyoumean==0.3.0
click-plugins==1.1.1
click-repl==0.2.0
cloudpickle==2.2.1
colorcet==3.0.1
comm==0.1.3
contourpy==1.0.7
cryptography==40.0.2
cx-Oracle==8.3.0
cycler==0.11.0
dash==2.9.3
dash-ag-grid==2.0.0
dash-bootstrap-components==1.4.1
dash-core-components==2.0.0
dash-html-components==2.0.0
dash-renderer==1.9.1
dash-table==5.0.0
dask==2023.4.1
databricks-sql-connector==2.5.1
datashader==0.14.4
datashape==0.5.2
debugpy==1.6.7
decorator==5.1.1
defusedxml==0.7.1
diskcache==5.6.1
distributed==2023.4.1
et-xmlfile==1.1.0
executing==1.2.0
fakeredis==1.0.3
fastjsonschema==2.16.3
filelock==3.12.0
Flask==2.2.2
Flask-Compress==1.13
Flask-Cors==3.0.10
flask-request-id==0.1
Flask-SQLAlchemy==2.5.1
fonttools==4.39.3
formulaic==0.6.1
fqdn==1.5.1
frozendict==2.3.8
fsspec==2023.5.0
future==0.18.3
graphlib-backport==1.0.3
greenlet==2.0.2
gunicorn==20.1.0
h5py==3.8.0
humanize==4.6.0
idna==3.4
importlib-metadata==6.6.0
importlib-resources==5.12.0
interface-meta==1.3.0
ipykernel==6.23.0
ipython==8.12.2
ipython-genutils==0.2.0
ipywidgets==8.0.6
isoduration==20.11.0
itsdangerous==2.1.2
jedi==0.18.2
Jinja2==3.1.2
jmespath==1.0.1
joblib==1.2.0
jsonpointer==2.3
jsonschema==4.17.3
jupyter==1.0.0
jupyter-client==8.2.0
jupyter-console==6.6.3
jupyter-core==5.3.0
jupyter-dash==0.4.2
jupyter-events==0.6.3
jupyter-server==2.5.0
jupyter-server-terminals==0.4.4
jupyterlab-pygments==0.2.2
jupyterlab-widgets==3.0.7
jwt==1.3.1
kiwisolver==1.4.4
kombu==5.2.4
lifelines==0.27.7
llvmlite==0.40.0
locket==1.0.0
lorem==0.1.1
lz4==4.3.2
Mako==1.2.4
markdown-it-py==2.2.0
MarkupSafe==2.1.2
matplotlib==3.7.1
matplotlib-inline==0.1.6
mdurl==0.1.2
mistune==2.0.5
msgpack==1.0.5
multipledispatch==0.6.0
nbclassic==1.0.0
nbclient==0.7.4
nbconvert==7.4.0
nbformat==5.8.0
nest-asyncio==1.5.6
nested-lookup==0.2.22
notebook==6.5.4
notebook-shim==0.2.3
numba==0.57.0
numpy==1.24.3
oauthlib==3.2.2
openpyxl==3.1.2
packaging==20.9
pandas==1.5.3
pandocfilters==1.5.0
param==1.13.0
parso==0.8.3
partd==1.4.0
pexpect==4.8.0
pg8000==1.29.4
pickleshare==0.7.5
Pillow==9.5.0
pkgutil-resolve-name==1.3.10
platformdirs==3.5.0
plotly==5.14.1
progressbar2==4.2.0
prometheus-client==0.16.0
prompt-toolkit==3.0.38
psutil==5.9.5
psycopg2-binary==2.9.6
ptyprocess==0.7.0
pure-eval==0.2.2
pyarrow==12.0.0
pycparser==2.21
pyct==0.5.0
pydantic==1.10.7
Pygments==2.15.1
PyJWT==2.6.0
PyMySQL==1.0.3
pyodbc==4.0.39
pyOpenSSL==23.1.1
pyparsing==3.0.9
pyrsistent==0.19.3
python-dateutil==2.8.2
python-dotenv==1.0.0
python-json-logger==2.0.7
python-utils==3.5.2
pytz==2023.3
PyYAML==6.0
pyzmq==25.0.2
qtconsole==5.4.3
QtPy==2.3.1
redis==3.5.3
regex==2023.5.5
requests==2.30.0
retrying==1.3.4
rfc3339-validator==0.1.4
rfc3986-validator==0.1.1
rich==13.3.5
s3transfer==0.6.1
scikit-learn==1.2.2
scipy==1.10.1
scramp==1.4.4
Send2Trash==1.8.2
six==1.16.0
sniffio==1.3.0
sortedcontainers==2.4.0
soupsieve==2.4.1
SQLAlchemy==1.4.48
stack-data==0.6.2
tabulate==0.9.0
tblib==1.7.0
tenacity==8.2.2
terminado==0.17.1
threadpoolctl==3.1.0
thrift==0.16.0
tinycss2==1.2.1
toolz==0.12.0
tornado==6.3.1
traitlets==5.9.0
typing-extensions==4.5.0
uri-template==1.2.0
urllib3==1.26.15
vaex-core==4.16.1
vaex-hdf5==0.12.3
vine==5.0.0
wcwidth==0.2.6
webcolors==1.13
webencodings==0.5.1
websocket-client==1.5.1
Werkzeug==2.2.2
widgetsnbextension==4.0.7
wrapt==1.15.0
wsgi-request-id==0.2
xarray==2022.9.0
zict==3.0.0
zipp==3.15.0

Important: Apps deployed to Dash Enterprise use Python 3.8.12, so be sure that the packages in your internal index are compatible with this version. When Dash Enterprise is airgapped, it is not possible for Dash app developers to change the Python version that their apps use.

Similarly, if Dash app developers plan to deploy apps that depend on APT packages, you’ll need to prepare a custom APT repository with a TLS/SSL certificate from a globally trusted certificate authority (CA).
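As an illustration of what such a repository looks like to a client, an internal APT repository is typically referenced through a sources list entry like the following. The URL is a placeholder, and the Dash Enterprise APT repository setting itself is configured separately:

```
# Hypothetical sources.list entry for an internal APT repository (placeholder URL)
deb https://apt.internal.example.com/ubuntu focal main universe
```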

Prerequisites

Here’s what you’ll need to start your airgapped Dash Enterprise installation:

Self-signed certificates, internally signed certificates, and multiple separate certificates are not supported. If you obtained your certificate as multiple files, you need to combine them into a single .pem file. You can do this with cat server.pem intermediate.pem trustedroot.pem > fullchain.pem on Linux or copy server.pem+intermediate.pem+trustedroot.pem fullchain.pem on Windows, replacing the file names if yours are different.
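As a quick sanity check, you can count the certificates in the combined file: each one contributes a single BEGIN CERTIFICATE marker. The snippet below uses placeholder file contents to stay self-contained; with your real files, only the cat and grep lines apply.

```shell
# Create placeholder certificate files for illustration only.
for part in server intermediate trustedroot; do
  printf -- '-----BEGIN CERTIFICATE-----\n(placeholder for %s)\n-----END CERTIFICATE-----\n' "$part" > "$part.pem"
done

# Combine the chain, then confirm all three certificates are present.
cat server.pem intermediate.pem trustedroot.pem > fullchain.pem
grep -c -- 'BEGIN CERTIFICATE' fullchain.pem   # prints 3 for a complete three-part chain
```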

You’ll upload the full certificate chain and unencrypted private key during the configuration, and they will be used to terminate TLS/SSL. If you need to preview the required DNS entries, go to Creating DNS Entries, but note that you’ll only be able to create them after the installation.

Calculating a more accurate IP requirement: Kubernetes needs one IP address for each service or pod. The Dash Enterprise core system uses 46 services and 105 (Standard) or 151 (Premium) pods. In addition, pods are created when Dash app developers perform certain actions. If you know about your organization’s intended usage of Dash Enterprise, such as how many apps and workspaces developers plan to create, you can calculate a more accurate IP requirement than the rule of thumb above.
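As a hypothetical worked example of that calculation, suppose a Premium installation where developers are expected to create about 40 apps and 10 workspaces, each assumed here to consume one pod (the app and workspace counts are assumptions; your per-action pod counts may differ):

```shell
CORE_SERVICES=46       # Dash Enterprise core services (from above)
CORE_PODS=151          # core pods for Premium; use 105 for Standard
APPS=40                # estimated developer-created apps (assumption)
WORKSPACES=10          # estimated workspaces (assumption)
echo "IP addresses needed: $(( CORE_SERVICES + CORE_PODS + APPS + WORKSPACES ))"
```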

Preparing Your Installation

Contact our Customer Success team to get started. We’ll ask you:

Obtaining Your Installation Plan

When we have this information, we’ll send you a tailor-made Installation Plan as well as a link and password to a download portal from which you’ll need to download airgap bundles. Your Installation Plan is based on your conversation with Customer Success and contains everything you need to install Dash Enterprise for your organization.

Your Installation Plan contains:

Defining Variables in the Scripts

Unzip your Installation Plan and open the config file. At the top, edit the following variable values:

About storing and resetting this password: We recommend storing this password in your organization’s password manager, and giving access to any other members of your team who will be managing the Dash Enterprise system (notably performing upgrades and obtaining support bundles). This password is not retrievable with a kubectl command. It can be changed in the Admin Console UI by anyone who is able to log in with the current password. If lost, reset it by downloading the KOTS CLI and running kubectl kots reset-password plotly-system.

Finally, note the value for KOTS_VERSION. You’ll need this version number in a later step.

Downloading Your License and Airgap Bundles

In this step, you’ll download your Dash Enterprise license file as well as the airgap bundles required to install Dash Enterprise and the KOTS Admin Console. Note that the Dash Enterprise airgap bundle is approximately 15 GB, and the KOTS airgap bundle is approximately 1 GB.

To download the Dash Enterprise license and airgap bundles:

  1. On your workstation, use the link and password provided by our Customer Success team to go to the download portal.
  2. In the left sidebar, make sure Bring my own cluster is selected.
  3. Under License, select Download license.
  4. Under dash-enterprise Airgap Bundle, select Download airgap bundle.
  5. Under KOTS Airgap Bundle, select the airgap bundle corresponding to the version number for KOTS_VERSION in your config file; then select Download bundle. You’ll use this bundle to install the KOTS Admin Console in a later step.

Preparing Your Bootstrap Node

In this step, you’ll use GCP’s Compute Engine service to provision a VM that runs Ubuntu 20.04. This VM will serve as your bootstrap node.

To create a VM:

  1. In the Google Cloud console, select the project you want to use for your GKE cluster.

  2. Go to Compute Engine.

  3. Go to VM Instances; then select Create instance.

  4. In Name, enter a name for your VM.

  5. In Machine type, select e2-medium (2 vCPU, 4 GB memory).

<img>

  6. Configure the boot disk:

    1. Under Boot disk, select Change. The Boot disk pane opens.
    2. In Operating System, select Ubuntu.
    3. In Version, select Ubuntu 20.04 LTS (x86).
    4. In Size (GB), enter 50.

<img>

    5. Select Select.

  7. In Service account, select the service account you want to use.

  8. Review the network interface:

    1. Expand Advanced options.
    2. Expand Networking.
    3. Under Network interfaces, make sure the selected network has access to the internet (required by the cluster provisioning and preparation scripts that you’ll run on the bootstrap node). You’ll also need to open firewall rules on this network in step 12, so change the network to a more appropriate one if needed.

  9. Use the default settings for everything else or adjust them to your preference; then select Create.

  10. Add your public SSH key to the bootstrap node:

    1. Select your newly created VM; then select Edit.
    2. Under Security and access, select Add item.
    3. In SSH key 1, enter your SSH public key.
    4. Select Save. The username and key are displayed under SSH keys. You’ll use this username when SSHing into the VM in a later step.

<img>

  11. Review your network and bootstrap node settings to allow your bootstrap node to access your private VPC network (required by the cluster preparation script when pushing supporting software images to your private container registry). There are a few ways to accomplish this, such as adding the bootstrap node to your organization’s VPN. If you opt to add your private VPC network as a secondary interface on the bootstrap node (not recommended unless you have precautions in place), make sure the private subnet has Private Google Access set to On.

  12. Configure the firewall rules:

<img>

| Domain | Purpose | Direction |
| --- | --- | --- |
| packages.cloud.google.com | Download the Google Cloud CLI | Outbound |
| dl.k8s.io | Download kubectl | Outbound |
| github.com | Download Cert Manager and ImageSwap | Outbound |
| raw.githubusercontent.com | Download ImageSwap | Outbound |
| quay.io | Download Cert Manager and ImageSwap | Outbound |
| *.istio.io | Download Istio | Outbound |
| kots.io | Download the KOTS plug-in | Outbound |
| ubuntu.com | apt-get packages from Ubuntu | Outbound |
| launchpad.net | apt-get packages from Launchpad | Outbound |

Moving Files to Your Bootstrap Node

In this step, you’ll move your infrastructure provisioning script, service account key file, and KOTS airgap bundle to the bootstrap node. One way to do this is to use secure copy protocol (SCP).

To transfer the infrastructure provisioning script, service account key file, and KOTS airgap bundle to your bootstrap node’s home directory using SCP:

scp -i path/to/private/key path/to/infra/provisioning/script path/to/service/account/key path/to/kots/airgap/bundle <username>@<bootstrap-ip>:.

where path/to/private/key is the path to the SSH private key corresponding to the public key you added to your bootstrap node, path/to/infra/provisioning/script is the path to your infrastructure provisioning script, path/to/service/account/key is the path to your service account key file, path/to/kots/airgap/bundle is the path to the KOTS airgap bundle, <username> is the username displayed under SSH keys in your bootstrap node information, and <bootstrap-ip> is the external IP address displayed under Network interfaces in your bootstrap node information.

Provisioning and Preparing the GKE Cluster

Provisioning the GKE Cluster

In this step, you’ll run the cluster provisioning script.

To provision the GKE cluster:

  1. SSH into your bootstrap node:
    ssh -i path/to/private/key <username>@<bootstrap-ip>
    where path/to/private/key is the path to the SSH private key corresponding to the public key you added to your bootstrap node, <username> is the username displayed under your bootstrap node’s SSH keys, and <bootstrap-ip> is the external IP address displayed under Network interfaces.

  2. In the home directory of your bootstrap node, run the infrastructure provisioning script:
    bash provision_infra_gcp_airgapped.sh

The script takes several minutes to complete. Continue when you are returned to the command prompt.

You can review the cluster in the GCP console by going to Kubernetes Engine > Clusters and selecting it from the list.

  3. Make sure that there are no network settings that would prevent the nodes in your cluster from communicating with one another.

Adding the New Network to Existing Resources

When the script provisioned the GKE cluster, it created a new network for the Kubernetes control plane using the IP range that you specified for K8S_MASTER_CIDR.

In this step, you’ll open communication between the new network created for the Kubernetes control plane and your private network that the nodes run on. You’ll also allow your bootstrap node to reach the new network, which is required for the kubectl commands that you’ll run as part of the cluster preparation script.

To add the new network to your existing resources:

  1. Open the communication between the Kubernetes control plane and the nodes:

    1. In the Google Cloud console, go to VPC networks.
    2. Select your private network.
    3. Go to Firewalls.
    4. Add an egress firewall rule where Destination IPv4 ranges is set to the range you defined for K8S_MASTER_CIDR.
    5. Add an ingress firewall rule where Source IPv4 ranges is set to the range you defined for K8S_MASTER_CIDR.

For example, if your K8S_MASTER_CIDR is 172.16.1.16/28, your egress and ingress firewall rules look like:

<img>
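For sizing reference, the /28 suffix determines how many addresses the control-plane range covers (GKE expects a /28 here). A quick check of the example range above:

```shell
# A /28 leaves 32 - 28 = 4 host bits, i.e. 16 addresses:
# 172.16.1.16/28 spans 172.16.1.16 through 172.16.1.31.
echo $(( 2 ** (32 - 28) ))   # prints 16
```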

  2. Allow your bootstrap node to reach the Kubernetes control plane. One way to do this is to add a route to your bootstrap node:
    sudo ip route add <control-plane-ip-range> via <private-subnet-gateway> dev ens5
    where <control-plane-ip-range> is the IP range that you defined for K8S_MASTER_CIDR and <private-subnet-gateway> is your private subnet gateway.

  3. Test the cluster connection from the bootstrap node:
    kubectl cluster-info

This command should output information about the Kubernetes cluster you just provisioned. If it doesn’t, more environment-specific adjustments may be needed. Ensure
that you are able to successfully run this command before continuing to the next step.

Preparing the Cluster and Port-Forwarding the KOTS Admin Console

In this step, you’ll run the cluster preparation script. This script does the following:

To prepare your cluster and port-forward the KOTS Admin Console:

  1. In the home directory of your bootstrap node, run the cluster preparation script:
    bash prepare_cluster_gcp_airgapped.sh

  2. When you are prompted for the KOTS install location with Enter installation path (leave blank for /usr/local/bin), press Enter to accept the default.

  3. When you are prompted to grant write permissions to /usr/local/bin, press y (you will not be prompted for a password).

The script takes several minutes to complete. Continue when you see the message Forwarding from 0.0.0.0:8800 -> 3000 (do not exit yet).

If you exit by mistake, restart the port-forward with kubectl port-forward -n plotly-system svc/kotsadm --address 0.0.0.0 8800:3000.

Installation

Now that your GKE cluster is provisioned, you’re ready to install Dash Enterprise on it. The KOTS Admin Console will take you through uploading your Dash Enterprise license and airgap bundle.

To access the KOTS Admin Console and install Dash Enterprise:

  1. On your workstation, go to http://<bootstrap-ip>:8800, where <bootstrap-ip> is the external IP address of your bootstrap node.
  2. Enter the password that you set for ADMIN_PASSWORD in Defining Variables in the Scripts; then select Log in. You are prompted to upload your license.
  3. Drag or browse to the license file in your Installation Plan; then select Upload license. The Admin Console opens to the Install in airgapped environment page, where your private container registry information is automatically entered.
  4. In the area labelled “Drag your airgap bundle here or choose a bundle to upload,” drag or browse to the Dash Enterprise .airgap file you downloaded earlier.
  5. If you have not retrieved the Dash Enterprise images into your private container registry before the installation (SKIP_PUSH_IMAGES and SKIP_REGISTRY_CHECK are false in your config file), clear the Disable Pushing Images to Registry checkbox. (Note that this setting will be saved and applied when you upgrade Dash Enterprise. Change it in the Admin Console registry settings if you don’t want to keep this workflow for Dash Enterprise upgrades).
  6. Select Upload airgap bundle. The upload can take several minutes.

When the upload is complete, the KOTS Admin Console opens to the Configure Dash Enterprise page.

Uploading the Certificate and Running Preflight Checks

Now that Dash Enterprise is installed, you’re ready for configuration. The KOTS Admin Console will take you through uploading your TLS/SSL certificate and running preflight checks.

On the Configure Dash Enterprise page, do the following:

  1. Upload your TLS/SSL certificate and key.
  2. Select Continue. The Admin Console runs preflight checks, which can take up to a few minutes.
  3. Wait for the preflight checks to complete. If the results are all successful, select Continue. If you encounter an error, contact Customer Success.
    The Admin Console opens to the dashboard, where the status of the system is displayed. The system is not ready until DNS entries are created, so it is normal for the status to display “Missing” or “Unavailable”.
  4. On your bootstrap node, press Ctrl+C to disconnect from the Admin Console for now (you can reconnect with kubectl port-forward -n plotly-system svc/kotsadm --address 0.0.0.0 8800:3000).

Creating DNS Entries

In this step, you’ll create the DNS entries according to your organization’s best practices.

To create the DNS entries for Dash Enterprise:

  1. On your bootstrap node, get the load balancer IP:

kubectl get service -n plotly-system ingress-nginx-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}' && echo

  2. Create the DNS entries according to the table below.

| Name | Type | Value |
| --- | --- | --- |
| <base-domain> | A record | <load-balancer-ip> |
| api-<base-domain> | CNAME | <base-domain> |
| ws-<base-domain> | CNAME | <base-domain> |
| git-<base-domain> | CNAME | <base-domain> |
| registry-<base-domain> | CNAME | <base-domain> |
| auth-<base-domain> | CNAME | <base-domain> |
| admin-<base-domain> | CNAME | <base-domain> |

Your base domain is an A record whose value is the IP of the load balancer that you obtained in step 1. The sub-domains required for Dash Enterprise are CNAME records whose values are your base domain.

  3. Once the DNS entries are created and propagated, go to the Admin Console using its sub-domain: https://admin-<your-dash-enterprise-server>.

Continue when the status in the Admin Console is Ready.

<img>

Configuring User Access

Make sure members of your organization who will be using Dash Enterprise have the right accesses. The exact steps look different depending on your environment. If your organization uses a VPN, configuring this access might involve defining and assigning VPN profiles.

Using the load balancer IP you obtained, configure the access as follows:

Important: Port 22 is required for app developers to deploy their apps with git push over SSH. Dash Enterprise also supports deployments over HTTPS, but not if you configure authentication using a SAML or OIDC identity provider. If you plan to use a SAML or OIDC identity provider, make sure port 22 is open to app developers.

Accessing Dash Enterprise

Before you can log in to Dash Enterprise at https://&lt;your-dash-enterprise-server&gt;, you’ll need to create a Dash Enterprise user in Keycloak. Keycloak is the identity and access management solution for Dash Enterprise.

You’ll be returning to Keycloak when you’re ready to configure authentication for Dash Enterprise. To learn about important settings and choose between different identity provider modes, go to Using Keycloak.

Obtaining and Storing the Keycloak Password

In this step, you’ll retrieve the Keycloak password that is stored as a secret in your cluster and save it according to your organization’s best practices.

To obtain and store the Keycloak password:

  1. On your bootstrap node, retrieve the password to Keycloak (this displays the password in plain text):

kubectl get secret keycloak-secrets -n plotly-system -o jsonpath='{.data.KEYCLOAK_PASSWORD}' | base64 -d && echo

Note about recovering the Keycloak password: If you change this password via the Keycloak interface, it will no longer correspond to what is stored in your cluster. We recommend keeping it as is so that you can always recover it with this kubectl get secret command.

  2. Copy the password.
  3. Add the password to your organization’s password manager or other secure storage, along with the username admin. You can share these credentials with other members in your organization who need to access Keycloak.
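The `base64 -d` stage in the retrieval command works because Kubernetes stores secret values base64-encoded. A standalone illustration with a sample value (not a real password):

```shell
# Decoding a sample base64 value the same way the kubectl pipeline above does.
echo 'cGxvdGx5LXNlY3JldA==' | base64 -d && echo   # prints: plotly-secret
```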

Creating Your Dash Enterprise Admin User

In this step, you’ll log in to Keycloak using the stored credentials and create a new user with the admin role. The admin role grants access to the Admin section of the Dash Enterprise App Manager, which you’ll use to configure system limits
in a later step. Learn more about the admin role.

To access Keycloak and create your admin user:

  1. Go to https://auth-<your-dash-enterprise-server>.
  2. Select Administration Console.
  3. Enter the Keycloak credentials that you obtained and stored.

<img>

  4. Select Sign In.
  5. Make sure Dash is selected in the realm list in the top left corner.
  6. Select Users > Add User.
  7. In Username, enter the username you want to use.
  8. Select Save. Additional settings become available.
  9. Go to Credentials.
  10. In Password and Password Confirmation, enter the password you want to use.
  11. Select Set Password; then select Set Password again to confirm.
  12. Assign the admin role:
    1. Go to Role Mappings.
    2. In Client Roles, select dash.
    3. In Available Roles, select admin; then select Add selected. Note that if you intend to deploy Dash apps, you’ll also need the licensed_user role, and assigning this role consumes a license seat.

To log in to Dash Enterprise with this user, go to https://<your-dash-enterprise-server> and enter the credentials that you saved in Keycloak. Dash Enterprise opens to the Portal. Go to the App Manager by selecting Apps > App Manager.

<img>

You can now safely delete the VM that you used as your bootstrap node.

Setting System Limits

In this step, you’ll safeguard Dash Enterprise against usage that would cause the Kubernetes cluster to exceed the resources it can support. Specifically, you’ll add limits to the number of pods and volumes (PVCs) that can exist; when a limit is reached, Dash app developers are temporarily prevented from performing actions that would create more pods and volumes on the cluster. To do so, you’ll use the System Limits setting in the Admin section of the App Manager. To learn how to calculate and set limits that are appropriate for your cluster, go to Pod and Volume Limits.
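As a rough, hypothetical starting point for a pod limit (the Pod and Volume Limits page is authoritative), you might subtract the core system’s pods from the cluster’s total pod capacity; the node count here is an example value:

```shell
NODES=3                # number of worker nodes in the cluster (example value)
PODS_PER_NODE=110      # GKE's default maximum pods per node
CORE_PODS=151          # Dash Enterprise core pods (Premium; use 105 for Standard)
echo "Pod headroom for developer activity: $(( NODES * PODS_PER_NODE - CORE_PODS ))"
```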