Last updated (UTC): 2025-04-22.

- The Flink Bigtable connector enables real-time streaming, serialization, and writing of data from a specified data source to a Bigtable table using either the Apache Flink Table API or the Datastream API.
- To use the connector, the data sink must be a pre-existing Bigtable table with predefined column families.
- The connector offers three built-in serializers for converting data into Bigtable mutation entries: `GenericRecordToRowMutationSerializer`, `RowDataToRowMutationSerializer`, and `FunctionRowMutationSerializer`. Custom serializers are also supported.
- Two serialization modes are available: column family mode, where all data is written to a single column family, and nested-rows mode, where each top-level field maps to its own column family.
- When Bigtable is used as a data sink with the connector, exactly-once behavior is achieved automatically because Bigtable's `mutateRow` mutation is idempotent, provided timestamps are not changed on retries and the pipeline itself satisfies exactly-once semantics.