The starting point in time when a query is started: a JSON string specifying a starting timestamp for each TopicPartition. The returned offset for each partition is the earliest offset whose timestamp is greater than or equal to the given timestamp in the corresponding partition. If no matching offset exists, the query fails immediately to prevent unintended reads from that partition. (This is a limitation as of now, and will be addressed in the near future.) Spark simply passes the timestamp information to KafkaConsumer.offsetsForTimes and does not interpret or reason about the value. For more details on KafkaConsumer.offsetsForTimes, please refer to its javadoc. Also, the meaning of the timestamp here can vary according to Kafka configuration (log.message.timestamp.type): please refer to the Kafka documentation for further details.
Note: this option requires Kafka 0.10.1.0 or higher.
Note2: startingOffsetsByTimestamp takes precedence over startingOffsets.
Note3: for streaming queries, this only applies when a new query is started; resuming will always pick up from where the query left off. Newly discovered partitions during a query will start at earliest.
Example: {"topicA":{"0": 1000, "1": 1000}, "topicB": {"0": 2000, "1": 2000}}
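As a sketch of how this option might be used, the snippet below builds the JSON value from a plain dict and shows (commented out) where it would be passed to a Kafka source. The topic names, partition numbers, timestamps, and broker address are illustrative assumptions, not values from the source; the commented reader chain assumes a `SparkSession` named `spark` and the spark-sql-kafka package on the classpath.

```python
import json

# Starting timestamps (epoch milliseconds) per topic and partition.
# Topics, partitions, and timestamps here are hypothetical examples.
starting_by_ts = {
    "topicA": {"0": 1000, "1": 1000},
    "topicB": {"0": 2000, "1": 2000},
}

# The option value must be a JSON string, not a dict.
option_value = json.dumps(starting_by_ts)
print(option_value)

# Sketch only -- requires a running Kafka broker and a SparkSession:
#
# df = (spark.readStream
#       .format("kafka")
#       .option("kafka.bootstrap.servers", "host1:9092")   # assumed address
#       .option("subscribe", "topicA,topicB")
#       .option("startingOffsetsByTimestamp", option_value)
#       .load())
```

Because startingOffsetsByTimestamp takes precedence over startingOffsets, setting both would leave the timestamp-based option in effect.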