Query engine
Spark - pyspark
Question
We store our data in HDFS in hourly subdirectories, written by batch processing.
Here are examples of the directories we write for each hour:

hdfs://team/data/logtype_a/2024/05/01/00
hdfs://team/data/logtype_b/2024/05/01/01

Many of our data applications have used these directories as input for a long time, and it is practically hard to change the input directories in those applications.
Even so, I'd like to use Iceberg with this data, with the metadata location "hdfs://team/data/rawlog/metadata", because we'd like to read the whole dataset as a single table (from StarRocks).
So my question is: can I load the data for every hour under a single metadata location?
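For illustration, the hourly layout described above can be enumerated programmatically. This is a minimal sketch; the base path and hour range are just the examples from the question, and the helper name is made up:

```python
from datetime import datetime, timedelta

def hourly_dirs(base: str, start: datetime, hours: int) -> list[str]:
    """Enumerate hourly HDFS subdirectories in the yyyy/MM/dd/HH layout
    described above. `base` and the range are illustrative."""
    return [
        f"{base}/{(start + timedelta(hours=i)):%Y/%m/%d/%H}"
        for i in range(hours)
    ]

# Hourly paths for one log type, matching the layout above:
dirs = hourly_dirs("hdfs://team/data/logtype_a", datetime(2024, 5, 1, 0), 2)
```

Because the hour is just a path suffix, any per-hour registration step (for example, an Iceberg procedure call) can be driven by the same enumeration the batch jobs already use.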
We use PySpark to load the data.
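Since the original snippet is not shown, here is a hedged sketch of one way to register the existing hourly directories into a single Iceberg table using Iceberg's `add_files` Spark procedure, which records existing Parquet files in a table without rewriting or moving them. The catalog name `my_catalog` and table name `rawlog.logs` are assumptions, not from the issue; in a real PySpark session each statement would be passed to `spark.sql(...)` against a catalog configured for the desired metadata location:

```python
# Assumed names: catalog "my_catalog", table "rawlog.logs" (not from the issue).
# In a live session each statement would be run via spark.sql(stmt), with the
# Iceberg runtime jar on the classpath and the catalog pointed at HDFS.
hour_dirs = [
    "hdfs://team/data/logtype_a/2024/05/01/00",
    "hdfs://team/data/logtype_b/2024/05/01/01",
]

def add_files_stmt(hdfs_dir: str) -> str:
    """Build an Iceberg add_files CALL that registers the Parquet files
    under one hourly directory into the target table."""
    return (
        "CALL my_catalog.system.add_files("
        "table => 'rawlog.logs', "
        f"source_table => '`parquet`.`{hdfs_dir}`')"
    )

stmts = [add_files_stmt(d) for d in hour_dirs]
```

Each hourly batch job could append one such call after writing its directory, so every hour accumulates under the single table (and thus a single metadata location readable from StarRocks), while the existing applications keep reading the raw directories unchanged.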