You can print your current datasources to stdout by running:
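
    superset export_datasources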

To save your datasources to a file, run:
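
    superset export_datasources -f <filename>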

By default, default (null) values are omitted. Use the -d flag to include them. If you want back references to be included (e.g. for a column, the id of the table it belongs to), use the -b flag.
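
For example, to print the export with defaults and back references included (a sketch combining the flags described above):

    superset export_datasources -d -b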

Alternatively, you can export datasources through the UI:

1. Open Sources -> Databases to export all tables associated with a single or multiple databases. (Use Tables for one or more tables, Druid Clusters for clusters, Druid Datasources for datasources)
2. Click Actions -> Export to YAML
3. If you want to import an item that you exported through the UI, you will need to nest it inside its parent element, e.g. a database needs to be nested under databases and a table needs to be nested inside a database element (see the sketch below).
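
For instance, a single table exported through the UI could be wrapped like this before importing (the database and table names below are only placeholders):

    databases:
    - database_name: examples
      tables:
      - table_name: my_table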

To obtain an exhaustive list of all fields you can import using the YAML import, run:

    superset export_datasource_schema

    As a reminder, you can use the -b flag to include back references.
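
For example, to include back references in the schema export:

    superset export_datasource_schema -b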

    Importing Datasources from YAML

To import datasources from one or more YAML files, run:

    superset import_datasources -p <path> -r

The -r flag searches the supplied path recursively.

The sync flag -s takes parameters in order to sync the supplied elements with your file. Be careful: this can delete the contents of your meta database. Example:
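
    superset import_datasources -p <path> -s columns,metrics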

This will sync all metrics and columns for all datasources found in the supplied path with the Superset meta database. This means columns and metrics not specified in the YAML will be deleted. If you add tables to the columns,metrics list, tables would be synchronised as well.
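
For example, extending the parameter list to also sync tables (a sketch; tables not present in the YAML would then be removed from the meta database as well):

    superset import_datasources -p <path> -s columns,metrics,tables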

If you don't supply the sync flag (-s), importing will only add and update (override) fields. For example, you can add a verbose_name to the column ds in the table random_time_series from the example datasets by saving the following YAML to a file and then running the import_datasources command.

    databases:
    - database_name: main
      tables:
      - table_name: random_time_series
        columns:
        - column_name: ds
          verbose_name: datetime
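
Assuming the YAML above is saved as, say, verbose_name.yaml (an illustrative filename), the import command from above applies it:

    superset import_datasources -p verbose_name.yaml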