Replies: 1 comment
-
Wow, this is a bunch of questions. It's better to limit a post to one topic; otherwise you won't get many answers, or even worse, it's hard to keep a common thread in follow-up questions. Anyway, inlined below you'll find some short answers.
This looks like a bug worth investigating. You'd best open an issue with instructions for reproduction, or check whether it's already been reported and fixed:
Everything Mill does in BSP mode was requested by the BSP client. Generated sources should only be regenerated if their inputs change. If the dependencies of the generated sources change, it is correct to regenerate them, since Mill can't "assume" it's safe to skip. You can always implement more complex logic with a persistent task. Mill already tries to minimize the impact of module-definition changes on its caches, but when in doubt, it will always prefer correctness over speed. There is probably room for improvement, but this is best discussed in a dedicated topic.
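To illustrate the persistent-task escape hatch: a persistent task keeps its `dest` folder between runs, so it can decide for itself whether the expensive work is needed. A minimal sketch, assuming recent (0.12+) Mill syntax; the `protos` folder and the marker-file logic are made up for illustration:

```scala
// build.mill -- sketch of a persistent task that skips unchanged work
package build
import mill._

object example extends Module {
  def protoDir = Task.Source(moduleDir / "protos") // hypothetical input folder

  def generated = Task(persistent = true) {
    val marker = Task.dest / "inputs.sig"
    val sig = protoDir().sig.toString // PathRef signature of the inputs
    // Task.dest survives between runs for persistent tasks, so the task
    // can detect "inputs unchanged" itself and skip the regeneration.
    if (!os.exists(marker) || os.read(marker) != sig) {
      // ... run the expensive generation into Task.dest here ...
      os.write.over(marker, sig)
    }
    PathRef(Task.dest)
  }
}
```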
Most forked processes, especially those from … Also, when using …
This is currently not possible.
ScalaPB being a contrib module means it is maintained by the community. There is no guarantee that contrib modules are as polished and optimized as the core modules.
Using a worker over an …
No. Your understanding seems correct.
No reason besides the fact that nobody has done it yet. Have a look at the other workers in Mill's core modules; you'll see that many of them do what you suggested and are created with specific classloaders.
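As a rough illustration of that pattern (the names are made up, and the exact `Jvm` helper signature varies between Mill versions):

```scala
// build.mill -- sketch of a worker that owns a dedicated classloader
package build
import mill._

trait MyToolModule extends Module {
  // The tool's jars, resolved elsewhere as a normal cached task.
  def toolClasspath: T[Seq[PathRef]]

  // Created once per Mill daemon; every downstream task that calls
  // toolWorker() reuses the same classloader instance.
  def toolWorker = Task.Worker {
    mill.util.Jvm.createClassLoader(
      toolClasspath().map(_.path),
      parent = getClass.getClassLoader
    )
  }
}
```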
I suggest reading these sections of the documentation:
-
Hopefully this is the right forum to ask, but I have a bunch of questions about Mill and its usage:
Sometimes when I run a compilation in Mill, it ends like this:
It looks like everything "works," and it exits with code 0, but some tasks seem incomplete.
If I run `__.compile` again, I usually see: … without any actual tasks running. Sometimes it takes multiple attempts to reach 12856, but the logs don't show which tasks are being executed incrementally. The behavior is also not consistent: sometimes all tasks complete, other times a different number of them do.
Is this just a visual issue? Is there a way to find out which tasks were skipped in the first run (e.g., the 13 tasks missing from `12843/12856`)? I tried running with `--debug` as well, but it didn't show anything useful.
If we do a `mill resolve _`, it shows all of the project's tasks, but it doesn't show any of Mill's own tasks. For example, something like `mill.idea.GenIdea/` is not shown there (I found it only in the GitHub discussions). I tried things like `mill resolve mill._`, but it didn't work. What is the correct way to discover those kinds of internal tasks?
When IntelliJ does a BSP sync, it seems to trigger a full project compilation, including generated sources. This makes pulling new changes from `master` very slow. Is there a way to configure it so IntelliJ just detects and indexes the sources, without forcing a full recompile? With Gradle, you can still work on other modules even if some are broken due to missing generated sources.
Say I have a project with modules A and B (a rough sketch is below). I run `__.compile`. Then I add an mvnDep, for example, to module B (though it seems to happen on any change at all). When I run `__.compile` again, module A recompiles and regenerates its generated sources, even though it hasn't changed. And based on the previous question, the BSP sync in IntelliJ runs a full compile every time, so this is even more costly.
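A hypothetical sketch of the layout (the real project is larger; module names and versions are made up):

```scala
// build.mill -- hypothetical sketch of the two-module layout described above
package build
import mill._, scalalib._

object B extends ScalaModule {
  def scalaVersion = "2.13.14"
  // adding any mvnDep here triggers the behavior described
}

object A extends ScalaModule {
  def scalaVersion = "2.13.14"
  def moduleDeps = Seq(B)
  // A also produces generated sources (e.g. via ScalaPB)
}
```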
It seems that changing any build configuration in any package.mill invalidates ALL the caches and causes a full recompile. This is in contrast to Gradle, where only the changed and dependent modules need recompiling. Is there a way to avoid this? In a large project this can be extremely expensive.
In Gradle, I can do something like `new File("abc.txt")`, with `abc.txt` next to `build.gradle`, by setting `-Duser.dir=$projectDir`. This works fine. In Mill, I set the same `-Duser.dir=$moduleDir` option in `forkArgs`, but the file can't be read. The `file.getCanonicalPath` looks correct, but reading fails. I also tried `--no-filesystem-checker` in case sandboxing was the issue, but that didn't help. What am I missing here?
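Concretely, what I tried looks roughly like this (a sketch; my real module is more involved):

```scala
// build.mill -- sketch of the forkArgs override described above
package build
import mill._, scalalib._

object app extends ScalaModule {
  def scalaVersion = "2.13.14"

  // Mirror the Gradle setup: point user.dir at the module directory
  // so `new File("abc.txt")` resolves next to the build file.
  def forkArgs = Task {
    super.forkArgs() ++ Seq(s"-Duser.dir=${moduleDir}")
  }
}
```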
Is there a way to split the output folder per module? I know the output folder can be changed globally, but I haven't seen an option to separate it by module. Within the `out` folder things are already split by module, so ideally those per-module outputs could be co-located with the modules themselves. With many modules, the output folder can get very large.
While learning Mill, I looked at the ScalaPB contrib implementation. There's this code:
From what I see, `ScalaPBWorker` is stateless and isn't expensive to instantiate, so it doesn't seem to benefit from being a worker. It seems like its methods could just live in an `object`, or directly in `ScalaPBModule`, without a worker at all. By contrast, this example makes sense:
since creating a classloader is expensive and reusable.
Am I misunderstanding how `Task.Worker` should be used? Why not pass reusable state (like a classloader) as a constructor parameter to the `ScalaPBWorker`, so it's only initialized once?
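In other words, roughly this shape (a sketch; `CachingScalaPBWorker` is hypothetical, and the real `ScalaPBWorker` has no such constructor):

```scala
// build.mill -- sketch: create the expensive state once in the worker
// task and store it as constructor state on the worker object.
package build
import mill._
import mill.contrib.scalapblib.ScalaPBModule

// Hypothetical wrapper holding the expensive, reusable state.
class CachingScalaPBWorker(val classLoader: ClassLoader)

trait MyProtoModule extends ScalaPBModule {
  def cachingWorker = Task.Worker {
    val cl = mill.util.Jvm.createClassLoader(scalaPBClasspath().map(_.path))
    new CachingScalaPBWorker(cl) // reused until the Mill daemon restarts
  }
}
```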
so it’s only initialized once?Similarly, we have something like
def scalaPBUnpackProto: T[PathRef]
in theScalaPBModule
. So does this mean that each and every module that use scalapb will have it's own copy ofscalaPBUnpackProto
andscalaPBClasspath
? If we have 100s of modules, wouldn't it be very expensive?ScalaPBWorker
where the ClassLoader/unpacking was done only once across all modules, how could we go about it? What would have been nice to do is something likeand have mill automatically create an in memoery cache of (Function Arugments -> Task.Worker), and allow different modules who use the same version, to share the same instance of the Worker. But since Worker task definitions cannot have input parameters, it doesn't work. What is correct way to go about that?
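For reference, the closest I've come up with is hanging a single worker off a shared top-level module, so every consumer references the same task (a sketch; `scalaPBShared` and the worker body are hypothetical, not Mill's real API):

```scala
// build.mill -- sketch: one shared module owns the worker, so the
// expensive setup happens once per Mill daemon for all consumers.
package build
import mill._, scalalib._

object scalaPBShared extends Module {
  def sharedWorker = Task.Worker {
    // classloader creation / proto unpacking would go here, once
    new Object() // stand-in for the real worker state
  }
}

object foo extends ScalaModule {
  def scalaVersion = "2.13.14"
  def generatedSources = Task {
    val worker = scalaPBShared.sharedWorker()
    // ... use the shared worker to generate sources into Task.dest ...
    Seq(PathRef(Task.dest))
  }
}
```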