
Conversation


@jherico jherico commented Sep 27, 2021

Fix for https://issues.apache.org/jira/browse/FLINK-24379

What is the purpose of the change

Add missing components required to use the AWS Glue Schema registry in Table connectors.

Brief change log

  • Added GlueSchemaRegistryAvroFormatFactory class and a reference to it in flink-formats/flink-avro-glue-schema-registry/src/main/resources/META-INF/services/org.apache.flink.table.factories.Factory (see the registration sketch below)
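
For reference, such a Java SPI registration file contains just the fully-qualified name of the factory class, one entry per line:

org.apache.flink.formats.avro.glue.schema.registry.GlueSchemaRegistryAvroFormatFactory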

Verifying this change

This change added tests and can be verified as follows:

  • Added test class org.apache.flink.formats.avro.glue.schema.registry.GlueSchemaRegistryAvroDeserializationSchemaTest, based on org.apache.flink.formats.avro.registry.confluent.RegistryAvroFormatFactoryTest
  • Verified that the test passed after some work correcting the options naming.

Does this pull request potentially affect one of the following parts:

  • Dependencies (does it add or upgrade a dependency): yes
    • Adds internal dependency from flink-avro-glue-schema-registry to the table API jars
  • The public API, i.e., is any changed class annotated with @Public(Evolving): no public API changed
  • The serializers: don't know
  • The runtime per-record code paths (performance sensitive): no
  • Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
  • The S3 file system connector: no

Documentation

  • Does this pull request introduce a new feature? No. It adds an existing capability that is present in all the other Avro formats to the AWS Glue Avro format
  • If yes, how is the feature documented? JavaDocs, tweaked from the similar docs in RegistryAvroFormatFactory

@flinkbot
Collaborator

Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community
to review your pull request. We will use this comment to track the progress of the review.

Automated Checks

Last check on commit 6c5edf9 (Mon Sep 27 05:07:56 UTC 2021)

Warnings:

  • 1 pom.xml file was touched: Check for build and licensing issues.
  • No documentation files were touched! Remember to keep the Flink docs up to date!
  • This pull request references an unassigned Jira ticket. According to the code contribution guide, tickets need to be assigned before starting with the implementation work.

Mention the bot in a comment to re-run the automated checks.

Review Progress

  • ❓ 1. The [description] looks good.
  • ❓ 2. There is [consensus] that the contribution should go into Flink.
  • ❓ 3. Needs [attention] from.
  • ❓ 4. The change fits into the overall [architecture].
  • ❓ 5. Overall code [quality] is good.

Please see the Pull Request Review Guide for a full explanation of the review process.


The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

Bot commands

The @flinkbot bot supports the following commands:

  • @flinkbot approve description to approve one or more aspects (aspects: description, consensus, architecture and quality)
  • @flinkbot approve all to approve all aspects
  • @flinkbot approve-until architecture to approve everything until architecture
  • @flinkbot attention @username1 [@username2 ..] to require somebody's attention
  • @flinkbot disapprove architecture to remove an approval you gave earlier

@flinkbot
Collaborator

flinkbot commented Sep 27, 2021

CI report:

Bot commands

The @flinkbot bot supports the following commands:

  • @flinkbot run azure re-run the last Azure build


@Airblader Airblader left a comment


Thanks for the PR! I left some comments regarding consistency for now. However, there seem to be more fundamental issues, such as incorrect formatting. Please make sure to follow the IDE setup guide and update the PR accordingly.

@dannycranmer
Contributor

Can you please add an e2e test for the Table API support?


@Override
public ChangelogMode getChangelogMode() {
    return ChangelogMode.upsert();
}
Contributor

What does this mode refer to? Does this impact the type of connector? For instance, Kafka supports upsert, while Kinesis does not (insert only).

Author

I'm actually a bit confused on this point. The Format interface requires implementation of the getChangelogMode method, but I don't understand why this is tied to the format at all. If I'm using a Kafka-Upsert connector then I should get upsert records... if I'm using a generic Kafka stream I should expect all kinds of records, both of which are totally orthogonal to the format used.

When I look at the other Format implementations, they all seem to use either the explicit form of ChangelogMode.all() or ChangelogMode.insertOnly(), with no obvious (to me) pattern for why a given Format implementation gets one or the other.

I've changed the code in these two instances to ChangelogMode.all() to match the subset of other formats that also do this, including the DebeziumAvroFormatFactory, but without a deeper understanding of how Format.getChangelogMode is used, I can't reason more precisely about what this should be.

Contributor

The format can define a changelog, but it'll be up to the connector to defer to the format for providing the changelog mode. SocketDynamicTableSource in flink-examples would be an example where the connector just defers this to the format.
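
For illustration, a minimal sketch (not code from this PR) of a table source deferring the changelog mode to its format, in the style of the SocketDynamicTableSource example mentioned above; the class name and fields are hypothetical:

import org.apache.flink.api.common.serialization.DeserializationSchema;
import org.apache.flink.table.connector.ChangelogMode;
import org.apache.flink.table.connector.format.DecodingFormat;
import org.apache.flink.table.connector.source.DynamicTableSource;
import org.apache.flink.table.connector.source.ScanTableSource;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.types.DataType;

/** Hypothetical table source that defers its changelog mode to the format. */
public class DeferringTableSource implements ScanTableSource {

    private final DecodingFormat<DeserializationSchema<RowData>> decodingFormat;
    private final DataType producedDataType;

    public DeferringTableSource(
            DecodingFormat<DeserializationSchema<RowData>> decodingFormat,
            DataType producedDataType) {
        this.decodingFormat = decodingFormat;
        this.producedDataType = producedDataType;
    }

    @Override
    public ChangelogMode getChangelogMode() {
        // The connector does not hard-code a mode; it asks the format for one.
        return decodingFormat.getChangelogMode();
    }

    @Override
    public ScanRuntimeProvider getScanRuntimeProvider(ScanContext runtimeProviderContext) {
        // The actual source runtime provider is omitted in this sketch.
        throw new UnsupportedOperationException("Omitted for brevity.");
    }

    @Override
    public DynamicTableSource copy() {
        return new DeferringTableSource(decodingFormat, producedDataType);
    }

    @Override
    public String asSummaryString() {
        return "Deferring table source (sketch)";
    }
}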

Author

So, I did a test of an actual Flink job last night and discovered that unless this is set to insertOnly, you can't create an upsert-kafka connector using it. I clearly don't understand what this changelog mode refers to, but I'm just going to cope.
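
Based on that finding, the method from the hunk above would end up in its insert-only form:

@Override
public ChangelogMode getChangelogMode() {
    // upsert-kafka rejects value formats that are not insert-only; it layers
    // the upsert semantics on top of the format itself.
    return ChangelogMode.insertOnly();
}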

@dannycranmer
Contributor

We will also need a documentation update similar to this:

If you do not have capacity to contribute this, can you please raise a follow-up Jira and link it to this one?

@MartijnVisser
Contributor

We're receiving a lot of community feedback that new features are either not documented or not documented properly. It would be really good if we could get the documentation in with this PR.

@jherico
Author

jherico commented Sep 28, 2021

Responded to feedback, modified the changelog mode to match existing code, and added the format document.

{{< label "Format: Serialization Schema" >}}
{{< label "Format: Deserialization Schema" >}}

The Glue Schema Registry (``avro-glue``) format allows you to read records that were serialized by the ``com.amazonaws.services.schemaregistry.serializers.avro.AWSKafkaAvroSerializer`` and to write records that can in turn be read by the ``com.amazonaws.services.schemaregistry.deserializers.avro.AWSKafkaAvroDeserializer``. These records have their schemas stored out-of-band in a configured registry provided by the AWS Glue Schema Registry [service](https://docs.aws.amazon.com/glue/latest/dg/schema-registry.html#schema-registry-schemas).
Contributor

The Glue Schema Registry

Please add "AWS" here:

The AWS Glue Schema Registry

Author

Will fix.
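
For context, a minimal sketch of using the documented format from the Table API. The format name ``avro-glue`` comes from the documentation excerpt above; the table definition, Kafka connector settings, and especially the ``avro-glue.*`` option keys are hypothetical placeholders, since the option naming was still being reworked in this PR:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class AvroGlueFormatExample {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.newInstance().inStreamingMode().build());
        // 'avro-glue' is the documented format name; the 'avro-glue.*' keys
        // below are illustrative placeholders, not final option names.
        tEnv.executeSql(
                "CREATE TABLE orders (\n"
                        + "  order_id STRING,\n"
                        + "  price DOUBLE\n"
                        + ") WITH (\n"
                        + "  'connector' = 'kafka',\n"
                        + "  'topic' = 'orders',\n"
                        + "  'properties.bootstrap.servers' = 'localhost:9092',\n"
                        + "  'format' = 'avro-glue',\n"
                        + "  'avro-glue.aws.region' = 'us-east-1',\n"
                        + "  'avro-glue.registry.name' = 'my-registry'\n"
                        + ")");
    }
}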

Format Options
----------------

Yes, these options have inconsistent naming conventions. No, I can't fix it. This is for consistency with the existing [AWS Glue client code](https://github.com/awslabs/aws-glue-schema-registry/blob/master/common/src/main/java/com/amazonaws/services/schemaregistry/utils/AWSSchemaRegistryConstants.java#L20).
Contributor

If we want to enforce a naming convention consistent with other Flink formats/connectors, we could transform them in the factory.

Author

I'm doing another pass on the naming of the options and will have it in the next commit. It should be more in line with other formats, and, as you suggest, I'm doing the translation inside the factory.
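
A rough sketch of that kind of translation inside the factory. The Flink-facing option keys here are hypothetical; the Glue-side keys are assumed to come from the AWSSchemaRegistryConstants class linked above:

import java.util.HashMap;
import java.util.Map;

import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.configuration.ConfigOptions;
import org.apache.flink.configuration.ReadableConfig;

import com.amazonaws.services.schemaregistry.utils.AWSSchemaRegistryConstants;

final class GlueOptionTranslator {

    // Hypothetical Flink-style option keys; the real names were still in flux.
    static final ConfigOption<String> AWS_REGION =
            ConfigOptions.key("aws.region").stringType().noDefaultValue();
    static final ConfigOption<String> REGISTRY_NAME =
            ConfigOptions.key("registry.name").stringType().noDefaultValue();

    /** Maps Flink-style format options onto the keys the Glue client expects. */
    static Map<String, Object> toGlueClientConfig(ReadableConfig formatOptions) {
        Map<String, Object> configs = new HashMap<>();
        configs.put(AWSSchemaRegistryConstants.AWS_REGION, formatOptions.get(AWS_REGION));
        configs.put(AWSSchemaRegistryConstants.REGISTRY_NAME, formatOptions.get(REGISTRY_NAME));
        return configs;
    }

    private GlueOptionTranslator() {}
}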

name: Avro AWS Glue Schema Registry
maven: flink-avro-glue-schema-registry
category: format
sql_url: https://repo.maven.apache.org/maven2/org/apache/flink/flink-avro-glue-schema-registry/$version/flink-avro-glue-schema-registry-$version.jar
Contributor

I believe this needs to include the $scala_version, and if possible, only render for Scala 2.12

Author

The module is pure Java, and its only Scala-binary-versioned dependencies are in test or provided scope. The module is currently located here in the central repository, so I don't think this is the case.

Contributor

This has recently changed, since 1.15 it will only support Scala 2.12. You can see the new artifact here.

@jherico jherico requested a review from dannycranmer October 1, 2021 09:00
<!-- The above dependency hard-codes the Scala 2.12 binary version of this jar, so we replace it -->
<dependency>
    <groupId>com.kjetland</groupId>
    <artifactId>mbknor-jackson-jsonschema_${scala.binary.version}</artifactId>
Contributor

@jherico Does this change mean that the format will be compatible with Scala 2.11 also? Although 2.11 has been recently removed

@dannycranmer
Contributor

Can you please add an e2e test for the Table API support?

Bump: Here is an example. Note that keys need to be provided via IT_CASE_GLUE_SCHEMA_ACCESS_KEY and IT_CASE_GLUE_SCHEMA_SECRET_KEY. The Flink CI provides keys to run the tests.
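
A minimal sketch of how a test could pick up those keys and skip when they are absent; the test class name and body are hypothetical:

import static org.junit.Assume.assumeTrue;

import org.junit.Before;
import org.junit.Test;

public class GlueSchemaRegistryTableITCase {

    private String accessKey;
    private String secretKey;

    @Before
    public void setUp() {
        accessKey = System.getenv("IT_CASE_GLUE_SCHEMA_ACCESS_KEY");
        secretKey = System.getenv("IT_CASE_GLUE_SCHEMA_SECRET_KEY");
        // Skip (rather than fail) when the CI credentials are not configured.
        assumeTrue(accessKey != null && secretKey != null);
    }

    @Test
    public void testTableApiRoundTrip() {
        // The actual Table API round trip against AWS Glue is omitted here.
    }
}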

@MartijnVisser
Contributor

@jherico @dannycranmer Is there anything we can do to move this PR forward?

@dannycranmer
Contributor

@MartijnVisser we do not have capacity to pick it up right now. If we do not hear back from @jherico, then we could potentially pick it up sometime before the 1.16 release.

@jherico
Author

jherico commented Apr 7, 2022

@dannycranmer if I create a GlueSchemaRegistryAvroKafkaITCase sibling to the existing test case you linked, should I refactor both to use a common base class, since almost all of the Avro-related code would be the same between the two cases, or would it be preferable to live with the duplicated code?

@jherico
Author

jherico commented Apr 7, 2022

@dannycranmer having spent most of my working day on this today, I've come to the conclusion that this class looks to be a better example on which to pattern a new end-to-end test. Specifically, it does almost precisely what I want to do in my test, but uses the Confluent registry, whereas the test you linked doesn't use the Table API at all as far as I can tell. Please let me know if you concur.

@dannycranmer
Contributor

Hello @jherico, sorry for the delay. Yes, I concur; the test I linked is based on the DataStream API.

should I refactor both to use a common base class

If this is still a concern based on the recent question, yes please, removing duplicated code would be helpful.

@dannycranmer
Contributor

This connector has been moved to https://github.com/apache/flink-connector-aws/tree/main/flink-formats-aws/flink-avro-glue-schema-registry. Closing the PR. Please reopen it targeting flink-connector-aws.
