Delta

Delta file data source

Properties

Properties supported in this source are shown below (* indicates required fields).
Property
Description
Name *
Name of the data source
Description
Description of the data source
Processing Mode
Select for batch mode and deselect for streaming mode. If 'Batch' is selected, the value of the switch is set to true; if 'Streaming' is selected, it is set to false.
Default: true
Schema
Source schema to assist during the design of the pipeline
Path or Table Name *
Path where the file is located, or the name of the Delta table to read if it is saved to the data catalog (see the batch-read sketch after the properties list).
Select Fields / Columns
Comma-separated list of fields / columns to select from the source.
Example: firstName, lastName, address1, address2, city, zipcode
Default: *
Filter Expression
SQL where clause for filtering records. This is also used to load partitions from the source.
Example: date=2022-01-01, year = 22 and month = 6 and day = 2
Distinct Values
Select rows with distinct column values.
Default: false
Enable Format Check
Strictly checks for Delta tables if enabled. Disabling lets you query Athena, Presto, and Hive tables on top of Delta files.
Default: false
Path Glob Filter
Optional glob pattern to only include files with paths matching the pattern. The syntax follows org.apache.hadoop.fs.GlobFilter. It does not change the behavior of partition discovery (see the file-option sketch after the properties list).
Recursive File Lookup
Recursively loads files and disables partition inference. If your folder structure is partitioned as columnName=value (e.g. processDate=2022-01-26), the recursive option WILL NOT read the partitions correctly.
Default: false
Version of Table
Timestamp string in yyyyMMddHHmmssSSS format, or the table version as a long value obtained from the output of DESCRIBE HISTORY. Currently only the version number as a string is supported; the table@v[Number] syntax is NOT supported (see the time-travel sketch after the properties list).
Example: yyyyMMddHHmmssSSS
Default: yyyyMMddHHmmssSSS
Normalize Column Names
Normalizes column names by replacing the special characters ,;{}()&/\n\t= and space with the given string.
Example: _
Ignore Corrupt Files
If selected, jobs will continue to run when encountering corrupted files, and the contents that have been read will still be returned.
Ignore Missing Files
Select to ignore missing files while reading data from the source.
Modified Before
An optional timestamp to only include files with modification times occurring before the specified time. The provided timestamp must be in the following format: YYYY-MM-DDTHH:mm:ss
Example: 2020-06-01T13:00:00
Modified After
An optional timestamp to only include files with modification times occurring after the specified time. The provided timestamp must be in the following format: YYYY-MM-DDTHH:mm:ss
Example: 2020-06-01T13:00:00
Watermark Field Name
Field name to be used as the watermark. If unspecified in streaming mode, the default field name is 'tempWatermark'.
Example: myConsumerWatermark
Default: tempWatermark
Watermark Value
Watermark delay value for streaming mode (see the streaming sketch after the properties list).
Example: 10 seconds, 2 minutes
Cache
MEMORY_ONLY: Persist data in memory only, in deserialized format.
MEMORY_AND_DISK: Persist data in memory; if enough memory is not available, evicted blocks are stored on disk.
MEMORY_ONLY_SER: Same as MEMORY_ONLY, but persists data in serialized format. This is generally more space-efficient than the deserialized format, but more CPU-intensive to read.
MEMORY_AND_DISK_SER: Same as MEMORY_AND_DISK, but persists data in serialized format.
DISK_ONLY: Persist the data partitions only on disk.
MEMORY_ONLY_2, MEMORY_AND_DISK_2: Same as the levels above, but replicate each partition on two cluster nodes.
OFF_HEAP: Similar to MEMORY_ONLY_SER, but stores the data in off-heap memory. This requires off-heap memory to be enabled.
Default: NONE
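
Examples

The sketches below are illustrative only: they show roughly how these properties map onto plain Spark / Delta Lake reads, written in PySpark. They assume a SparkSession with the Delta Lake connector available; all paths, table names, and column names are placeholders, and the exact code the pipeline generates may differ.

The core read properties (Path or Table Name, Select Fields / Columns, Filter Expression, Distinct Values) correspond to the batch-read sketch below.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("delta-batch-read-sketch").getOrCreate()

    # Path or Table Name: load from a path, or use spark.read.table("db.events")
    # for a Delta table registered in the data catalog.
    df = (
        spark.read.format("delta")
        .load("/data/events_delta")                      # placeholder path
        .selectExpr("firstName", "lastName", "zipcode")  # Select Fields / Columns
        .where("date = '2022-01-01'")                    # Filter Expression (also prunes partitions)
        .distinct()                                      # Distinct Values
    )
    df.show()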
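
The Version of Table property corresponds to Delta Lake time travel; the time-travel sketch below expresses it with the versionAsOf or timestampAsOf read options, where the version number is the long value reported by DESCRIBE HISTORY.

    # Read a specific table version (a long value from DESCRIBE HISTORY).
    df_v3 = (
        spark.read.format("delta")
        .option("versionAsOf", 3)
        .load("/data/events_delta")
    )

    # Or read the table as it was at a given timestamp.
    df_ts = (
        spark.read.format("delta")
        .option("timestampAsOf", "2022-06-01 13:00:00")
        .load("/data/events_delta")
    )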
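
Path Glob Filter, Recursive File Lookup, Modified Before / Modified After, Ignore Corrupt Files, and Ignore Missing Files resemble Spark's generic file source options and configs. The file-option sketch below uses those Spark names directly; whether this source forwards the properties to Spark in exactly this form is an assumption.

    # Ignore Corrupt Files / Ignore Missing Files (session-level settings).
    spark.conf.set("spark.sql.files.ignoreCorruptFiles", "true")
    spark.conf.set("spark.sql.files.ignoreMissingFiles", "true")

    df = (
        spark.read.format("parquet")                      # generic file read for illustration
        .option("pathGlobFilter", "*.parquet")            # Path Glob Filter
        .option("recursiveFileLookup", "true")            # Recursive File Lookup (disables partition inference)
        .option("modifiedAfter", "2020-06-01T13:00:00")   # Modified After
        .option("modifiedBefore", "2021-06-01T13:00:00")  # Modified Before
        .load("/data/raw_files")                          # placeholder path
    )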
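
In streaming mode, the Watermark Field Name and Watermark Value properties correspond to Structured Streaming's withWatermark call, as in the streaming sketch below. The event-time column eventTime, the window size, and the console sink are placeholders.

    from pyspark.sql.functions import window

    stream_df = (
        spark.readStream.format("delta")
        .load("/data/events_delta")                # placeholder path
        .withWatermark("eventTime", "10 seconds")  # Watermark Field Name / Watermark Value
    )

    query = (
        stream_df.groupBy(window("eventTime", "2 minutes")).count()
        .writeStream.outputMode("update")
        .format("console")
        .start()
    )
    # query.awaitTermination()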
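
The Cache values are Spark storage levels; the caching sketch below applies one explicitly with persist(). Note that PySpark exposes only a subset of the levels listed above, since data on the Python side is always stored serialized.

    from pyspark import StorageLevel

    df = spark.read.format("delta").load("/data/events_delta")  # placeholder path

    # Equivalent of Cache = MEMORY_AND_DISK; other PySpark levels include
    # MEMORY_ONLY, DISK_ONLY, MEMORY_AND_DISK_2, and OFF_HEAP.
    df.persist(StorageLevel.MEMORY_AND_DISK)
    df.count()      # an action materializes the cache
    df.unpersist()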