The Web Console is the official Web client, relying on the REST API.
Available methods
- `/imp` for importing data from `.CSV` files
- `/exec` to execute a SQL statement
- `/exp` to export data
streams tabular text data directly into a table. It supports CSV, TAB and pipe (|
) delimited inputs with optional headers. There are no restrictions on data size. Data types and structures are detected automatically, without additional configuration. In some cases, additional configuration can be provided to improve the automatic detection as described in .
:::note
The structure detection algorithm analyses a chunk at the beginning of the file and relies on the relative uniformity of the data. When the first chunk is not representative of the rest of the data, automatic imports can yield errors.
If the data follows a uniform pattern, the number of lines analyzed for schema detection can be reduced with the `http.text.analysis.max.lines` key to improve performance during uploads. Usage of this setting is described in the HTTP server configuration documentation.
:::
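A minimal sketch of setting this key in `server.conf` (the value is illustrative):

```ini
# Analyse only the first 1,000 lines of an upload when detecting the schema
http.text.analysis.max.lines=1000
```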
`/imp` expects an HTTP POST request using the `multipart/form-data` Content-Type, with the following optional URL parameters, which must be URL encoded:
Further example queries, with context on the source CSV file contents and the generated tables, are provided in the examples below.
Names
Table and column names are subject to restrictions; the following characters are automatically removed:
`[whitespace]` `.` `?` `,` `:` `\` `/` `\\` `\0` `)` `(` `_` `+` `-` `*` `~` `%`
For example, a header such as `dew point` is imported as the column name `dewpoint`. When the header row is missing, column names are generated automatically.
Consistency guarantees
`/imp` benefits from the properties of the QuestDB storage model, although Atomicity and Durability can be relaxed to meet convenience and performance demands.
Atomicity
QuestDB is fully insured against any connection problems: if the server detects a closed socket, the entire request is rolled back instantly and transparently for any existing readers. The only time data can be partially imported is when atomicity is in relaxed mode and data cannot be converted to the column type. In this scenario, any "defective" row of data is discarded and `/imp` continues to stream the request data into the table.
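As a sketch, relaxed atomicity is requested via a URL parameter; the parameter name `atomicity` below is an assumption, so check the `/imp` parameter table for your version:

```shell
# Assumed parameter: discard rows that fail type conversion instead of aborting
curl -F data=@weather.csv 'http://localhost:9000/imp?atomicity=relaxed'
```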
Consistency
This property is guaranteed by the consistency of append transactions against the QuestDB storage engine.
Isolation and Durability
`/imp` streams data from the network socket buffer directly into memory-mapped files. At this point, the data is handed over to the OS and is resilient against QuestDB internal errors and unlikely, but hypothetically possible, crashes. This is the default method of appending data and it is chosen for its performance characteristics. In cases where a transaction has to be resilient against OS errors or power losses, physical durability can be enforced: at a cost to append performance, the QuestDB storage engine will also guarantee that each memory block is flushed to the physical device.
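A sketch of enforcing physical durability in the server configuration, assuming the `cairo.commit.mode` key applies to your version:

```ini
# Assumed key: fsync each commit to the physical device (default is nosync)
cairo.commit.mode=sync
```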
Automatic schema detection
The following example uploads a file `weather.csv`. An import can be performed with automatic schema detection using the following request:
```shell
curl -F data=@weather.csv 'http://localhost:9000/imp'
```
An HTTP status code of `200` will be returned and the response will be:
```
+-------------------------------------------------------------------------------+
| Location:     | weather.csv | Pattern   | Locale | Errors |
| Partition by  | NONE        |           |        |        |
+-------------------------------------------------------------------------------+
| Rows imported | 49976       |           |        |        |
+-------------------------------------------------------------------------------+
| 0 | ts       | TIMESTAMP | 0 |
| 1 | visMiles | DOUBLE    | 0 |
| 2 | tempF    | INT       | 0 |
| 3 | dewpF    | INT       | 0 |
+-------------------------------------------------------------------------------+
```
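The imported data can then be checked through `/exec`; a minimal sketch, assuming the table keeps the file name, as in the `/exp` example further down:

```shell
curl -G \
  --data-urlencode "query=select * from 'weather.csv' limit 5" \
  http://localhost:9000/exec
```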
User-defined schema
To specify the schema of a table, a schema object can be provided:
```shell
curl \
  -F schema='[{"name":"dewpF", "type": "STRING"}]' \
  -F data=@weather.csv 'http://localhost:9000/imp'
```
```
+------------------------------------------------------------------------------+
| Location:     | weather.csv | Pattern   | Locale | Errors |
| Partition by  | NONE        |           |        |        |
| Timestamp     | NONE        |           |        |        |
+------------------------------------------------------------------------------+
| Rows handled  | 49976       |           |        |        |
| Rows imported | 49976       |           |        |        |
+------------------------------------------------------------------------------+
| 0 | ts       | TIMESTAMP | 0 |
| 1 | visMiles | DOUBLE    | 0 |
| 2 | tempF    | INT       | 0 |
| 3 | dewpF    | STRING    | 0 |
+------------------------------------------------------------------------------+
```
Non-standard timestamp formats
Given a file `weather.csv` containing a timestamp with a non-standard format, the file can be imported as usual with the following request:
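```shell
curl -F data=@weather.csv 'http://localhost:9000/imp'
```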
An HTTP status code of `200` will be returned and the import will be successful, but the timestamp column is detected as a `STRING` type:
```
+-------------------------------------------------------------------------------+
| Location:     | weather.csv | Pattern | Locale | Errors |
| Partition by  | NONE        |         |        |        |
| Timestamp     | NONE        |         |        |        |
+-------------------------------------------------------------------------------+
| Rows handled  | 49976       |         |        |        |
| Rows imported | 49976       |         |        |        |
+-------------------------------------------------------------------------------+
| 0 | ts       | STRING | 0 |
| 1 | visMiles | DOUBLE | 0 |
| 2 | tempF    | INT    | 0 |
| 3 | dewpF    | INT    | 0 |
+-------------------------------------------------------------------------------+
```
To amend the timestamp column type, the following example curl provides a `schema` JSON object specifying that the `ts` column is of type `TIMESTAMP` with the pattern `yyyy-MM-dd - HH:mm:ss`.
Additionally, URL parameters are provided:
- `overwrite=true` to overwrite the existing table
- `timestamp=ts` to specify that the `ts` column is the designated timestamp column for this table
- `partitionBy=MONTH` to set a partitioning strategy on the table by `MONTH`
```shell
curl \
  -F schema='[{"name":"ts", "type": "TIMESTAMP", "pattern": "yyyy-MM-dd - HH:mm:ss"}]' \
  -F data=@weather.csv \
  'http://localhost:9000/imp?overwrite=true&timestamp=ts&partitionBy=MONTH'
```
The HTTP status code will be set to `200` and the response will show `0` errors parsing the timestamp column:
```
+------------------------------------------------------------------------------+
| Location:     | weather.csv | Pattern | Locale | Errors |
| Partition by  | MONTH       |         |        |        |
| Timestamp     | ts          |         |        |        |
+------------------------------------------------------------------------------+
| Rows handled  | 49976       |         |        |        |
| Rows imported | 49976       |         |        |        |
+------------------------------------------------------------------------------+
| 0 | ts       | TIMESTAMP | 0 |
| 1 | visMiles | DOUBLE    | 0 |
| 2 | tempF    | INT       | 0 |
| 3 | dewpF    | INT       | 0 |
+------------------------------------------------------------------------------+
```
/exec
`/exec` compiles and executes the SQL query supplied as a parameter and returns a JSON response.
Overview
`/exec` expects an HTTP GET request with the following query parameters, which must be URL encoded:
Examples
This endpoint returns responses in the following format:
```json
{
  "query": string,
  "columns": Array<{ "name": string, "type": string }>,
  "dataset": Array<Array<Value for Column1, Value for Column2>>,
  "count": Optional<number>,
  "timings": Optional<{ compiler: number, count: number, execute: number }>
}
```
You can find the exact list of types on the dedicated page.
Considering the query:
```shell
curl -G \
  --data-urlencode "query=select timestamp, tempF from weather limit 2;" \
  --data-urlencode "count=true" \
  http://localhost:9000/exec
```
An HTTP status code of `200` is returned, with a JSON response body in the format described above.
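An illustrative body (the actual values depend on the contents of the `weather` table):

```json
{
  "query": "select timestamp, tempF from weather limit 2;",
  "columns": [
    { "name": "timestamp", "type": "TIMESTAMP" },
    { "name": "tempF", "type": "INT" }
  ],
  "dataset": [
    ["2010-06-01T21:16:00.000000Z", 71],
    ["2010-06-01T21:17:00.000000Z", 72]
  ],
  "count": 2
}
```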
/exp
This endpoint allows you to pass URL-encoded queries, but the response is returned in tabular form rather than JSON, so it can be saved and reused directly.
`/exp` expects an HTTP GET request with the following parameters, which must be URL encoded:
Examples
Considering the query:
```shell
curl -G \
  --data-urlencode "query=select AccidentIndex, Date, Time from 'Accidents0514.csv'" \
  --data-urlencode "limit=5" \
  http://localhost:9000/exp
```
An HTTP status code of `200` is returned with the following response body:
"AccidentIndex","Date","Time"
200501BS00001,"2005-01-04T00:00:00.000Z",17:42
200501BS00002,"2005-01-05T00:00:00.000Z",17:36
200501BS00003,"2005-01-06T00:00:00.000Z",00:15
200501BS00004,"2005-01-07T00:00:00.000Z",10:35
200501BS00005,"2005-01-10T00:00:00.000Z",21:13
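Because `/exp` is a plain HTTP GET, the export can be written straight to a file using standard curl options:

```shell
# Save the exported CSV to a local file
curl -G \
  --data-urlencode "query=select AccidentIndex, Date, Time from 'Accidents0514.csv'" \
  --data-urlencode "limit=5" \
  http://localhost:9000/exp \
  -o accidents.csv
```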
Malformed queries
A call to `/exec` or `/exp` containing a malformed query will still return successfully (HTTP `200`), but the response body will have the following format:
```json
{
  "query": string,
  "error": string,
  "position": number
}
```
The `position` field is the character number, from the beginning of the string, where the error was found.
```shell
curl -G \
  --data-urlencode "query=SELECTT * FROM table;" \
  http://localhost:9000/exp
```
An HTTP status code of `200` is returned with the following response body:
```json
{
  "query": "SELECTT * FROM table;",
  "error": "function, literal or constant is expected",
  "position": 8
}
```