Apache Iceberg is a high-performance format for huge analytic tables. Iceberg brings the reliability and simplicity of SQL tables to big data, while making it possible for engines like Spark, Trino, Flink, Presto, Hive and Impala to safely work with the same tables, at the same time.

Background and documentation are available at https://iceberg.apache.org

## Status

Iceberg is under active development at the Apache Software Foundation.

The Iceberg format specification is stable, and new features are added with each version.

The core Java library is located in this repository and is the reference implementation for other libraries.

Documentation is available for all libraries and integrations.

## Collaboration

Iceberg tracks issues in GitHub and prefers to receive contributions as pull requests.

Community discussions happen primarily on the dev mailing list or on specific issues.

## Building

Iceberg is built using Gradle with Java 11, 17, or 21.

* To invoke a build and run tests: `./gradlew build`
* To skip tests: `./gradlew build -x test -x integrationTest`
* To fix code style for default versions: `./gradlew spotlessApply`
* To fix code style for all versions of Spark/Hive/Flink: `./gradlew spotlessApply -DallModules`

Iceberg table support is organized in library modules:

* `iceberg-common` contains utility classes used in other modules
* `iceberg-api` contains the public Iceberg API
* `iceberg-core` contains implementations of the Iceberg API and support for Avro data files; this is what processing engines should depend on
* `iceberg-parquet` is an optional module for working with tables backed by Parquet files
* `iceberg-arrow` is an optional module for reading Parquet into Arrow memory
* `iceberg-orc` is an optional module for working with tables backed by ORC files
* `iceberg-hive-metastore` is an implementation of Iceberg tables backed by the Hive metastore Thrift client
* `iceberg-data` is an optional module for working with tables directly from JVM applications (see the example at the end of this README)

Iceberg also has modules for adding Iceberg support to processing engines:

* `iceberg-spark` is an implementation of Spark's Datasource V2 API for Iceberg, with submodules for each Spark version (use runtime jars for a shaded version)
* `iceberg-flink` contains classes for integrating with Apache Flink (use `iceberg-flink-runtime` for a shaded version)
* `iceberg-mr` contains an `InputFormat` and other classes for integrating with Apache Hive

NOTE: The tests require Docker to execute. On macOS (with Docker Desktop), you might need to create a symbolic link to the Docker socket so that the tests can detect it:

```
sudo ln -s $HOME/.docker/run/docker.sock /var/run/docker.sock
```

In some cases a testcontainer may exit with an initialization error caused by an `IllegalStateException` in `GenericContainer`. One workaround for this problem is to set SELinux to permissive mode before running the tests:

```
sudo setenforce Permissive
./gradlew ...
sudo setenforce Enforcing
```

## Engine Compatibility

See the Multi-Engine Support page to learn about Iceberg compatibility with different Spark, Flink and Hive versions. For other engines such as Presto or Trino, please visit their websites for Iceberg integration details.

## Implementations

This repository contains the Java implementation of Iceberg. Other implementations can be found at:
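
## Example: using the core Java modules

The library modules listed under Building can be used directly from a JVM application without going through a processing engine. Below is a minimal sketch, not taken from the official documentation, of creating and scanning a table through the public API. It assumes `iceberg-api`, `iceberg-core`, and `iceberg-data` (plus Hadoop client libraries) are on the classpath; the warehouse path, namespace, table name, and class name are hypothetical.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.PartitionSpec;
import org.apache.iceberg.Schema;
import org.apache.iceberg.Table;
import org.apache.iceberg.catalog.TableIdentifier;
import org.apache.iceberg.data.IcebergGenerics;
import org.apache.iceberg.data.Record;
import org.apache.iceberg.hadoop.HadoopCatalog;
import org.apache.iceberg.io.CloseableIterable;
import org.apache.iceberg.types.Types;

public class IcebergQuickstart {
  public static void main(String[] args) throws Exception {
    // Hypothetical local warehouse directory; any Hadoop-compatible path works.
    HadoopCatalog catalog =
        new HadoopCatalog(new Configuration(), "file:///tmp/iceberg-warehouse");

    // Define a two-column schema using the public API from iceberg-api.
    Schema schema = new Schema(
        Types.NestedField.required(1, "id", Types.LongType.get()),
        Types.NestedField.optional(2, "data", Types.StringType.get()));

    // Create an unpartitioned table in a hypothetical "db" namespace.
    TableIdentifier name = TableIdentifier.of("db", "events");
    Table table = catalog.createTable(name, schema, PartitionSpec.unpartitioned());

    // Scan the (still empty) table with the generic record reader from iceberg-data.
    try (CloseableIterable<Record> records = IcebergGenerics.read(table).build()) {
      records.forEach(System.out::println);
    }
  }
}
```

Reading or writing Parquet-backed data files through `iceberg-data` additionally requires the optional `iceberg-parquet` module on the classpath.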