forked from apache/flink
adfsdfas #1
Merged
…if join condition contains 'IS NOT DISTINCT FROM'. Fixes FLINK-22098, which was caused by a mistake when rebasing. This closes #15695
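The 'IS NOT DISTINCT FROM' predicate mentioned above is SQL's null-safe equality: two NULLs compare as TRUE and NULL against any value as FALSE, unlike plain `=`, which yields unknown when either side is NULL. A minimal Python sketch of those semantics (the function name is illustrative, not from this PR):

```python
def is_not_distinct_from(a, b):
    """Null-safe equality as defined by SQL's IS NOT DISTINCT FROM:
    NULL vs NULL -> TRUE, NULL vs value -> FALSE, else ordinary equality.
    None stands in for SQL NULL here."""
    if a is None and b is None:
        return True
    if a is None or b is None:
        return False
    return bool(a == b)
```

A join condition using this predicate can therefore match rows whose key columns are both NULL, which plain equality joins never do.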
…TE TABLE' statement. This commit tries to:
1. resolve the conflicts
2. revert the changes made on the old planner
3. apply spotless formatting
4. fix DDL missing `TEMPORARY` keyword for temporary tables
5. display the table's full object path as catalog.db.table
6. support displaying the expanded query for views
7. add view tests in CatalogTableITCase, and adapt the sql client test to the new test framework
8. adapt the docs
This closes #13011
…t declare managed memory for Python operators This closes #15665.
… in Python DataStream API This closes #15702.
…precision isn't matching with declared DataType This closes #15689
…ute field This closes #15689
Additionally, speed up the AdaptiveScheduler ITCases by configuring a very low jobmanager.adaptive-scheduler.resource-stabilization-timeout.
When this incorrect assertion is violated, the scheduler can trip into an unrecoverable failover loop.
…IncrementalAggregate.out. After FLINK-22298 is finished, the ExecNode's id should always start from 1 in the json plan tests, but testIncrementalAggregate.out was overridden by FLINK-20613. This closes #15698
… on persisting. An exception can be thrown, for example, if the task is being cancelled. This was leading to the same buffer being recycled twice. Most of the time that just led to an IllegalReferenceCount exception, which was ignored since the task was being cancelled anyway. However, on rare occasions the buffer could be picked up by another task after being recycled the first time, recycled a second time, and then picked up by yet another (third) task. In that case two users shared the same buffer, which could lead to all sorts of data corruption.
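The failure mode above is a classic double-release on a reference-counted buffer. A minimal Python sketch of the counting discipline that makes a second recycle fail loudly instead of silently handing the buffer to two owners (class and method names are illustrative, not Flink's actual network buffer API):

```python
import threading

class RefCountedBuffer:
    """Toy reference-counted buffer: created with one reference,
    retain() adds a reference, recycle() drops one; the buffer is
    only freed when the count reaches zero, and any recycle past
    zero raises instead of corrupting a re-acquired buffer."""

    def __init__(self):
        self._lock = threading.Lock()
        self._ref_count = 1
        self.freed = False

    def retain(self):
        with self._lock:
            if self._ref_count <= 0:
                raise RuntimeError("retain on an already-freed buffer")
            self._ref_count += 1

    def recycle(self):
        with self._lock:
            if self._ref_count <= 0:
                # The bug described above: recycling a second time after
                # another task may already own the buffer.
                raise RuntimeError("illegal reference count: double recycle")
            self._ref_count -= 1
            if self._ref_count == 0:
                self.freed = True
```

The fix in the commit amounts to ensuring the cancellation path never calls recycle on a buffer it no longer owns.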
[FLINK-XXXX] Draft separation of leader election and creation of JobMasterService
[FLINK-XXXX] Continued work on JobMasterServiceLeadershipRunner
[FLINK-XXXX] Integrate RunningJobsRegistry, Cancelling state and termination future watching
[FLINK-XXXX] Delete old JobManagerRunnerImpl classes
[FLINK-22001] Add tests for DefaultJobMasterServiceProcess
[FLINK-22001][hotfix] Clean up ITCase a bit
[FLINK-22001] Add missing check for empty job graph
[FLINK-22001] Rename JobMasterServiceFactoryNg to JobMasterServiceFactory
This closes #15715.
…tor NOTICE file This closes #15706
… after execution failure. This closes #15713
…rrelate json serialization/deserialization This closes #15922.
…oupWindowAggregate json serialization/deserialization This closes #15934.
…g casting. Compare children individually for anonymous structured types. This fixes issues with primitive fields and Scala case classes. This closes #15935.
… structured types
…lasses. This removes the OrcTableSource and related classes, including OrcInputFormat. Use the filesystem connector with an ORC format as a replacement. It is possible to read via the Table & SQL API and convert the Table to the DataStream API if necessary. The DataSet API is not supported anymore. This closes #15891.
…oupAggregate json serialization/deserialization This closes #15928.
…erAggregate json serialization/deserialization This closes #15937.
At the moment Flink only cleans up the HA data (e.g. K8s ConfigMaps or ZooKeeper nodes) while shutting down the cluster. This is not enough for a long-running session cluster to which you submit multiple jobs. With this commit, we clean up the data for a particular job once it reaches a globally terminal state. This closes #15561.
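The cleanup rule described above can be sketched as follows; the store, the method names, and the state strings are illustrative stand-ins for Flink's actual HA services, not its real API:

```python
# States from which a job can never leave again ("globally terminal").
GLOBALLY_TERMINAL = {"FINISHED", "CANCELED", "FAILED"}

class HaDataStore:
    """Stand-in for per-job HA data (e.g. K8s ConfigMaps, ZooKeeper nodes)."""

    def __init__(self):
        self.per_job = {}

    def write(self, job_id, key, value):
        self.per_job.setdefault(job_id, {})[key] = value

    def clean_up_job(self, job_id):
        self.per_job.pop(job_id, None)

def on_job_state_change(store, job_id, new_state):
    # Clean up the job's HA data as soon as it reaches a globally terminal
    # state, instead of waiting for the whole cluster to shut down.
    if new_state in GLOBALLY_TERMINAL:
        store.clean_up_job(job_id)
```

For a long-running session cluster this bounds the HA footprint by the number of currently running jobs rather than every job ever submitted.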
…tApplyEvictorWindowStateReader
…elated classes. This removes the ParquetTableSource and related classes, including various ParquetInputFormats. Use the filesystem connector with a Parquet format as a replacement. It is possible to read via the Table & SQL API and convert the Table to the DataStream API if necessary. The DataSet API is not supported anymore. This closes #15895.
The HA e2e test will start a Flink application first and wait for three successful checkpoints, then kill the JobManager. A new JobManager should be launched and recover the job from the latest successful checkpoint. Finally, cancel the job; all the K8s resources should be cleaned up automatically. This closes #14172.
RetrievableStateStorageHelper is not only used by ZooKeeperStateHandleStore but also by KubernetesStateHandleStore.
We experienced cases where the ConfigMap was updated but the corresponding HTTP request failed due to connectivity issues. PossibleInconsistentStateException is used to reflect cases where it's not clear whether the data was actually written or not.
…n references. The previous implementation stored the state in the StateHandle. This caused problems when deserializing the state: a new instance was created that did not point to the actual state but was a copy of it. This refactoring introduces LongStateHandle, which holds the actual state, and LongRetrievableStateHandle, which references that handle.
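The handle-vs-copy distinction above can be illustrated with a minimal sketch; these Python classes only mirror the intent of the commit's (test-utility) classes, not their real implementations:

```python
class LongStateHandle:
    """Owns the actual mutable state: a single long value."""

    def __init__(self, value=0):
        self.value = value

class LongRetrievableStateHandle:
    """References the LongStateHandle rather than embedding its value,
    so retrieval always observes the live state instead of a stale copy
    made at construction/serialization time."""

    def __init__(self, handle):
        self.handle = handle

    def retrieve(self):
        return self.handle.value
```

Because the retrievable handle holds a reference, a later mutation of the underlying state is visible through it; a handle that copied the value at creation would keep returning the old value.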
…scardOnFailedStoring
…ng to CheckpointCoordinator This closes #15832.
…alGroupWindowAggregate which only supports insert-only input node This closes #14830
… and related classes. This removes the HBaseTableSource/Sink and related classes, including various HBaseInputFormats and HBaseSinkFunction. It is possible to read via the Table & SQL API and convert the Table to the DataStream API (or vice versa) if necessary. The DataSet API is not supported anymore. This closes #15905.
In order to better clean up job-specific HA services, this commit changes the layout of the zNode structure so that the JobMaster leader, checkpoints and checkpoint counter are now grouped below the jobs/ zNode. Moreover, this commit groups the leaders of the cluster components (Dispatcher, ResourceManager, RestServer) under /leader/process/latch and /leader/process/connection-info. This closes #15893.
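The new layout can be sketched as a pair of path helpers; the per-job component segments (e.g. "checkpoints") are hypothetical names for illustration, while the /jobs/ and /leader/process/ prefixes come from the commit message above:

```python
def job_znode(job_id, component):
    """Per-job grouping: JobMaster leader, checkpoints, and the
    checkpoint counter all live under the owning job's jobs/ zNode,
    so deleting /jobs/<id> removes all of that job's HA state at once."""
    return f"/jobs/{job_id}/{component}"

# Cluster-wide components (Dispatcher, ResourceManager, RestServer)
# share a single latch / connection-info pair.
PROCESS_LEADER_LATCH = "/leader/process/latch"
PROCESS_LEADER_CONNECTION_INFO = "/leader/process/connection-info"
```

Grouping per-job state under one subtree is what makes the job-specific cleanup in FLINK-15561-style changes a single recursive delete.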
What is the purpose of the change
(For example: This pull request makes task deployment go through the blob server, rather than through RPC. That way we avoid re-transferring them on each deployment (during recovery).)
Brief change log
(for example:)
Verifying this change
(Please pick either of the following options)
This change is a trivial rework / code cleanup without any test coverage.
(or)
This change is already covered by existing tests, such as (please describe tests).
(or)
This change added tests and can be verified as follows:
(example:)
Does this pull request potentially affect one of the following parts:
@Public(Evolving): (yes / no)
Documentation