Qore DataProvider Module Reference
Version 2.2
The DataProvider module provides APIs that allow hierarchical data structures from arbitrary sources to be described, queried, introspected, and updated. It also supports data providers with request-reply semantics, such as REST schemas or SOAP messaging.
The DataProvider module supports high-performance reading (searching) and writing, as well as record creation, upserting, and transaction management, if supported by the underlying data provider implementation.
The Qore command-line program qdp provides a user-friendly interface to data provider functionality.
This module provides the following primary classes:
The following supporting type classes are also provided:
This module uses the "QORE_DATA_PROVIDERS"
environment variable to register data provider modules. Each data provider registration module must provide one of the following two public functions.
Implement a public function with the following signature to support dynamic discovery of data providers:
Implement a public function with the following signature to support dynamic discovery of data provider types:
Data provider registration modules declared in the "QORE_DATA_PROVIDERS" environment variable must be separated by the platform-specific PathSep character (":" on UNIX-like systems, ";" on Windows), as in the following examples:

UNIX shell:
export QORE_DATA_PROVIDERS=MyConnectionProvider:OtherConnectionProvider

Windows cmd.exe:
set QORE_DATA_PROVIDERS=MyConnectionProvider;OtherConnectionProvider

Windows PowerShell:
$env:QORE_DATA_PROVIDERS="MyConnectionProvider;OtherConnectionProvider"
Data provider pipelines allow for efficient processing of record or other data in multiple streams and, if desired, in multiple threads.
Pipeline data can be any data type except a list, as list values are interpreted as multiple output values in pipeline processor objects.
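The fan-out behavior of list values can be illustrated with a small sketch (in Python, not the Qore API; `run_pipeline` and the example stages are hypothetical names): a stage that returns a list emits each element as a separate value to the remaining stages.

```python
def run_pipeline(stages, value):
    """Push one input value through a list of stage functions.

    If a stage returns a list, each element is treated as a separate
    output value and processed independently by the remaining stages,
    mirroring how DataProvider pipelines interpret list results.
    """
    if not stages:
        return [value]
    head, rest = stages[0], stages[1:]
    result = head(value)
    # a list result fans out into multiple downstream values
    outputs = result if isinstance(result, list) else [result]
    final = []
    for out in outputs:
        final.extend(run_pipeline(rest, out))
    return final

# a stage that splits one record into one record per tag
split_tags = lambda rec: [{"id": rec["id"], "tag": t} for t in rec["tags"]]
# a stage that transforms a single record
upper_tag = lambda rec: {**rec, "tag": rec["tag"].upper()}

print(run_pipeline([split_tags, upper_tag], {"id": 1, "tags": ["a", "b"]}))
# → [{'id': 1, 'tag': 'A'}, {'id': 1, 'tag': 'B'}]
```

Because a list fans out rather than passing through as a value, pipeline data itself must never be a list, as the text above notes.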
Bulk processing is the processing of record data in "hash of lists" form: a single hash where each key's value is a list of the values for that key. Records are formed by taking each hash key and using each list value, in order, as that key's value in successive records. If a key is assigned a single value instead of a list, it is interpreted as a constant value for all records. Note that the first key's value in bulk data must be a list of values in order for the bulk data to be properly detected.
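The "hash of lists" expansion described above can be sketched as follows (Python rather than Qore; `bulk_to_records` is an illustrative helper, not part of the module API). Note how a non-list value is repeated as a constant for every record:

```python
def bulk_to_records(bulk):
    """Expand "hash of lists" bulk data into a list of record hashes.

    The first key's value must be a list; its length gives the record
    count.  Keys mapped to a single (non-list) value are treated as a
    constant shared by all records.
    """
    first = next(iter(bulk.values()))
    count = len(first)  # the first key's value must be a list
    records = []
    for i in range(count):
        rec = {}
        for key, val in bulk.items():
            # single values act as constants for every record
            rec[key] = val[i] if isinstance(val, list) else val
        records.append(rec)
    return records

bulk = {
    "id": [1, 2, 3],          # one value per record
    "name": ["a", "b", "c"],
    "type": "user",           # constant for all records
}
print(bulk_to_records(bulk))
# → [{'id': 1, 'name': 'a', 'type': 'user'},
#    {'id': 2, 'name': 'b', 'type': 'user'},
#    {'id': 3, 'name': 'c', 'type': 'user'}]
```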
Each pipeline processor element must declare whether it is compatible with bulk processing by implementing the AbstractDataProcessor::supportsBulkApiImpl() method.
If a processor does not support the bulk API but bulk data is submitted, the bulk data is iterated, and each record is passed to the processor separately, incurring a significant performance penalty when processing large volumes of data.
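This fallback can be sketched as a simple dispatch (Python; `submit`, `supports_bulk()`, and `process()` are illustrative names, not the actual Qore API): when the processor reports no bulk support, the bulk hash is iterated and each record is submitted on its own.

```python
def submit(processor, data, is_bulk):
    """Send data to a processor, falling back to per-record iteration.

    `processor` must provide supports_bulk() and process(value); these
    names are illustrative only.  When bulk data meets a non-bulk
    processor, each record is extracted and submitted separately --
    correct, but much slower for large volumes than one bulk call.
    """
    if not is_bulk or processor.supports_bulk():
        return processor.process(data)
    # fallback: iterate the "hash of lists" into single records
    count = len(next(iter(data.values())))
    results = []
    for i in range(count):
        # single (non-list) values act as constants for every record
        rec = {k: (v[i] if isinstance(v, list) else v) for k, v in data.items()}
        results.append(processor.process(rec))
    return results

class Upper:
    """Toy processor that only handles one record at a time."""
    def supports_bulk(self):
        return False
    def process(self, rec):
        return {k: (v.upper() if isinstance(v, str) else v) for k, v in rec.items()}

print(submit(Upper(), {"name": ["a", "b"], "kind": "x"}, True))
# → [{'name': 'A', 'kind': 'X'}, {'name': 'B', 'kind': 'X'}]
```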
Release notes:
- desc key supporting a markdown description in data provider info (issue 4465)
- base_type key in input type info in getInputInfo (issue 4311)
- limit search option
- DUPLICATE-RECORD exception documentation for AbstractDataProvider::createRecord()