How to write the Kafka record key to a separate column (not a primary key) with the JDBC sink connector?

I'd like to write the Kafka record key to a separate column in PostgreSQL, and this column should NOT be a primary key. How do I configure my JDBC sink connector?

Example: records in the Kafka topic:

  1. key = "abc"; value = "value_1"
  2. key = "abc"; value = "value_2"

Result in the database:

| COL1 | COL2    |
|------|---------|
| abc  | value_1 |
| abc  | value_2 |

COL1 is not a primary key.

  • P.S. 1: Writing the value itself is not a problem.
  • P.S. 2: pk.mode=record_key creates a primary key from the Kafka record key, and that's not what I want (see the sketch below).
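For reference, this is the behavior being ruled out. A minimal sketch, assuming a primitive record key, where pk.fields names the column derived from the key and the JDBC sink makes that column the primary key:

```json
"pk.mode": "record_key",
"pk.fields": "col1"
```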

The JDBC sink only writes columns from the record value (the key is only used via pk.mode), so to get the key into a plain column, you'll need to relocate it into the ConnectRecord value Struct.

You should be able to use a Single Message Transform (SMT) such as this one (admittedly, it's old and mentions S3, but it should work for this purpose).

After installing it on your Connect cluster's classpath and configuring it in the connector, all records will be written to the database with the columns key|value|topic|timestamp. Also make sure that pk.mode=none, and use JsonConverter with schemas enabled (or Avro/Protobuf if you're using the Schema Registry).
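As a rough sketch, the connector config could look like the following. The connection details are placeholders, and the transform class name is an assumption based on the Archive-style transform linked above (check the README of whatever transform you actually install); the rest are standard JDBC sink and converter options:

```json
{
  "name": "postgres-sink",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "topics": "my_topic",
    "connection.url": "jdbc:postgresql://localhost:5432/mydb",
    "connection.user": "postgres",
    "connection.password": "secret",

    "pk.mode": "none",
    "auto.create": "true",

    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": "true",

    "transforms": "wrap",
    "transforms.wrap.type": "com.github.jcustenborder.kafka.connect.archive.Archive"
  }
}
```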

You can also rename or drop fields, for example with the built-in ReplaceField transform, if you don't need the topic or timestamp columns and explicitly want "col1" and "col2".
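For example, appending ReplaceField to the transform chain might look like this; renames and blacklist are standard ReplaceField options (newer Kafka versions call the latter exclude), while the key/value/topic/timestamp field names assume the layout produced by the transform above:

```json
"transforms": "wrap,rename",
"transforms.rename.type": "org.apache.kafka.connect.transforms.ReplaceField$Value",
"transforms.rename.renames": "key:col1,value:col2",
"transforms.rename.blacklist": "topic,timestamp"
```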