Parquet

Parquet file data source

Properties

The properties supported by this source are listed below (* indicates a required field).
Property
Description
Name *
Name of the data source
Description
Description of the data source
Processing Mode
Select for batch, unselect for streaming. If 'Batch' is selected the switch value is set to true; if 'Streaming' is selected it is set to false.
Default: true
Path *
Path to the file or folder to be read.
Example: s3a://[bucketpath]/load.parquet
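For orientation, a minimal PySpark sketch of what the Processing Mode switch and Path roughly correspond to when reading Parquet (the bucket path is hypothetical):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Batch (switch = true): one-shot read of the path
    batch_df = spark.read.parquet("s3a://my-bucket/landing/")

    # Streaming (switch = false): continuously pick up new files;
    # streaming file sources need an explicit schema
    stream_df = (
        spark.readStream
        .schema(batch_df.schema)
        .parquet("s3a://my-bucket/landing/")
    )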
Schema
Source schema to assist during the design of the pipeline
Filename Column
Adds the absolute path of the file being read as a new column, using the provided string as the column name.
Example: file_name
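In plain Spark the same effect comes from the input_file_name() function; the column name below mirrors the example above:

    from pyspark.sql.functions import input_file_name

    # given an active SparkSession `spark`
    df = spark.read.parquet("s3a://my-bucket/landing/")
    df = df.withColumn("file_name", input_file_name())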
Select Fields / Columns
Comma-separated list of fields / column names to select from the source.
Default: *
Filter Expression
SQL WHERE clause for filtering records.
Examples: date = '2022-01-01'; year = 22 and month = 6 and day = 2
Distinct Values
Select only rows with distinct column values.
Default: false
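Taken together, Select Fields / Columns, Filter Expression and Distinct Values behave like a select / where / distinct chain; a sketch, with hypothetical column names:

    # given an active SparkSession `spark`
    df = (
        spark.read.parquet("s3a://my-bucket/landing/")
        .selectExpr("date", "year", "month", "day")    # Select Fields / Columns
        .where("year = 22 and month = 6 and day = 2")  # Filter Expression
        .distinct()                                    # Distinct Values
    )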
Path Glob Filter
An optional glob pattern to only include files with paths matching the pattern. The syntax follows org.apache.hadoop.fs.GlobFilter. It does not change the behavior of partition discovery.
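This matches Spark's pathGlobFilter reader option:

    # given an active SparkSession `spark`
    # only files whose names match the glob are read; partition discovery is unaffected
    df = (
        spark.read
        .option("pathGlobFilter", "*.parquet")
        .parquet("s3a://my-bucket/landing/")
    )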
Recursive File Lookup
Recursively load files; this disables partition inferring. If your folder structure is partitioned as columnName=value (e.g. processDate=2022-01-26), the recursive option WILL NOT read the partitions correctly.
Default: false
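This corresponds to Spark's recursiveFileLookup reader option; note the partition caveat above:

    # given an active SparkSession `spark`
    # walks the directory tree; columnName=value folders are NOT treated as partitions
    df = (
        spark.read
        .option("recursiveFileLookup", "true")
        .parquet("s3a://my-bucket/landing/")
    )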
Normalize Column Names
Normalizes column names by replacing the special characters ,;{}()&/\n\t= and spaces with the given string.
Example: _
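Spark has no single built-in option for this; a minimal sketch of the likely behaviour, assuming a simple regex replacement with _:

    import re

    # given an active SparkSession `spark`
    df = spark.read.parquet("s3a://my-bucket/landing/")
    # replace each listed special character and space with the configured string
    normalized = df.toDF(*[re.sub(r"[ ,;{}()&/\n\t=]", "_", c) for c in df.columns])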
Merge Schema
Sets whether schemas collected from all Parquet part-files should be merged. This overrides spark.sql.parquet.mergeSchema.
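Equivalent to the per-read mergeSchema option, which takes precedence over the spark.sql.parquet.mergeSchema session setting:

    # given an active SparkSession `spark`
    # union the schemas of all part-files instead of trusting a single one
    df = (
        spark.read
        .option("mergeSchema", "true")
        .parquet("s3a://my-bucket/landing/")
    )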
Ignore Corrupt Files
Uses spark.sql.files.ignoreCorruptFiles to ignore corrupt files while reading data. When set to true, Spark jobs continue to run when corrupted files are encountered, and the contents read so far are still returned.
Ignore Missing Files
Uses spark.sql.files.ignoreMissingFiles to ignore missing files while reading data.
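Both toggles map to Spark SQL session configurations; a sketch:

    # given an active SparkSession `spark`
    # skip unreadable or deleted files instead of failing the job
    spark.conf.set("spark.sql.files.ignoreCorruptFiles", "true")
    spark.conf.set("spark.sql.files.ignoreMissingFiles", "true")
    df = spark.read.parquet("s3a://my-bucket/landing/")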
Modified Before
An optional timestamp to include only files with modification times occurring before the specified time. The timestamp must be in the form YYYY-MM-DDTHH:mm:ss (e.g. 2020-06-01T13:00:00).
Modified After
An optional timestamp to include only files with modification times occurring after the specified time. The timestamp must be in the form YYYY-MM-DDTHH:mm:ss (e.g. 2020-06-01T13:00:00).
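Modified Before and Modified After correspond to Spark's modifiedBefore / modifiedAfter reader options (Spark 3.1+), using the timestamp format shown above:

    # given an active SparkSession `spark`
    # only pick up files last modified inside the window
    df = (
        spark.read
        .option("modifiedAfter", "2020-06-01T13:00:00")
        .option("modifiedBefore", "2020-07-01T13:00:00")
        .parquet("s3a://my-bucket/landing/")
    )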
Watermark Field Name
Field name to be used as the watermark. If unspecified in streaming mode, the default field name is 'tempWatermark'.
Example: myConsumerWatermark
Default: tempWatermark
Watermark Value
Watermark value setting.
Examples: 10 seconds, 2 minutes
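In streaming mode these two settings plausibly translate to Structured Streaming's withWatermark; a sketch that assumes the watermark field is a timestamp column:

    # given an active SparkSession `spark`
    schema = spark.read.parquet("s3a://my-bucket/landing/").schema  # infer once via a batch read
    stream_df = (
        spark.readStream
        .schema(schema)
        .parquet("s3a://my-bucket/landing/")
        .withWatermark("myConsumerWatermark", "10 seconds")  # tolerate data up to 10 seconds late
    )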