
Commit f7a6299

address comments
1 parent d25e846 commit f7a6299

File tree

1 file changed: +5 −1 lines changed
  • sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala


sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala

Lines changed: 5 additions & 1 deletion
@@ -95,7 +95,9 @@ object SQLConf {
 
   /**
    * Returns the active config object within the current scope. If there is an active SparkSession,
-   * the proper SQLConf associated with the thread's session is used.
+   * the proper SQLConf associated with the thread's active session is used. If it's called from
+   * tasks in the executor side, a SQLConf will be created from job local properties, which are set
+   * and propagated from the driver side.
    *
    * The way this works is a little bit convoluted, due to the fact that config was added initially
    * only for physical plans (and as a result not in sql/catalyst module).
@@ -112,6 +114,8 @@ object SQLConf {
       new ReadOnlySQLConf(TaskContext.get())
     } else {
       if (Utils.isTesting && SparkContext.getActive.isDefined) {
+        // DAGScheduler event loop thread does not have an active SparkSession, the `confGetter`
+        // will return `fallbackConf` which is unexpected. Here we prevent it from happening.
         val schedulerEventLoopThread =
           SparkContext.getActive.get.dagScheduler.eventProcessLoop.eventThread
         if (schedulerEventLoopThread.getId == Thread.currentThread().getId) {
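
For context, here is a minimal sketch (not part of this commit) of the driver-to-executor propagation path the updated doc comment describes: values set as job local properties on the driver travel with each task and can be read back through TaskContext on the executor side, which is the kind of property bag a read-only SQLConf is built from. The config key used below is only for illustration; the object and app names are made up for the sketch.

// Illustrative sketch only, using the public SparkContext / TaskContext
// local-property APIs; it is not the SQLConf code changed in this commit.
import org.apache.spark.{SparkConf, SparkContext, TaskContext}

object LocalPropertyPropagationDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setMaster("local[2]").setAppName("local-property-demo"))

    // Driver side: local properties set here are attached to every task of
    // jobs submitted from this thread.
    sc.setLocalProperty("spark.sql.shuffle.partitions", "10")

    // Executor side: inside a task, the propagated value is visible through
    // the TaskContext.
    sc.parallelize(1 to 4, 2).foreach { _ =>
      val v = TaskContext.get().getLocalProperty("spark.sql.shuffle.partitions")
      assert(v == "10")
    }

    sc.stop()
  }
}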
