Terragrunt Stacks compatibility #1924
Hi @hndrewaall!! Wow, interesting. Note I didn't look in depth at the RFC, but going off your two questions regarding digger:

RE no. 1: Unfortunately we don't yet support overriding the command that early. Would you consider running it as a step in the GitHub Actions workflow prior to the digger step, or does it need to be run for every single terragrunt module?

RE no. 2: That is correct, and it uses that to count the number of resources changed. Unfortunately we currently only support extra_args for the plan: and apply: phases and don't have a way to override the show command. What you could do for a quick test is to update the args here to append the values mentioned, so perhaps something like that would work after the update.
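For context, the extra_args support that does exist today is configured per phase in digger.yml, roughly like this (a sketch following digger's documented workflow step format; the var-file values are just placeholders):

```yaml
workflows:
  default:
    plan:
      steps:
        - init
        - plan:
            extra_args: ["-var-file=dev.tfvars"]   # placeholder value
    apply:
      steps:
        - init
        - apply:
            extra_args: ["-var-file=dev.tfvars"]   # placeholder value
```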
And you can reference your digger fork as an action step to try it out. Happy to support in any way to get your PoC repo working!
Thanks so much for the prompt reply!

RE 1: That's what I've been doing (check my linked PR), but is it possible to work with the cloud version (not manual mode) this way? AFAICT the webhooks fire, but my job is never invoked, though possibly I am doing something else wrong, as I haven't yet tried it with "vanilla" Terragrunt or plain TF. I've updated my PR to use the backend again and you can see what I mean.

RE 2: I'll give it a shot, thanks!
Update:

RE 1: I tried vendoring the generated Terragrunt files, and I do now get Digger Cloud to parse as expected (though still with errors, since it doesn't have the …).

RE 2: This worked! I'm going to try opening a PR once I clean it up and can add some tests.
Hi @hndrewaall, nice, looking forward to it!
@motatoes So I'm digging into this (WIP PR here), and I was wondering if you could make sure I'm not going down the wrong path before I invest a bunch more time. In particular, it seems to me that there are two main things (beyond the aforementioned arg hacking) that I need to update in digger to ensure compatibility with Stacks: …
I am trying to avoid 2. for now by simply vendoring the generated stackfiles. In addressing 1., however, I am still a bit stuck. I'm pretty new to Golang and have only just read up on the …
Hmm, I'm digging in a bit further and I'm seeing a bigger issue: in …, I see two ways forward: …
I'm guessing 1. is ugly but easier to implement, though perhaps harder to maintain longer term. Would love to get your thoughts!
TL;DR: I'm trying to figure out the following: if a user is using terragrunt stacks, does it mean the only thing they end up planning, applying, and destroying will be the stack itself?

Hi @hndrewaall, sorry for the delayed response! I could benefit from properly understanding terragrunt stacks as compared to terragrunt modules. From my basic understanding, a stack groups together multiple terragrunt modules so that you can deploy them together with a single command.

Currently digger supports two ways of working with terragrunt modules (terragrunt.hcl files). The first is a direct one-to-one mapping:

```yaml
projects:
  - name: mytmodule
    dir: projects/dev/myterragruntvpc/
    terragrunt: true
```

The second is generation, which takes an entire directory hierarchy, traverses it, and converts it to digger projects:

```yaml
generate_projects:
  blocks:
    - block_name: dev
      terragrunt: true
      root_dir: "dev/"
```

This would traverse all terragrunt.hcl files under dev/ and find the terragrunt modules under there.

Where I have some doubts in terms of terragrunt stacks is: should we consider each terragrunt stack as a single unit (project) in digger, or should we do something smarter and traverse all modules under stacks? The simple thing would be to consider a stack as a deployable unit on its own, like the sketch below.
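A hypothetical sketch of what that could look like (the `terragrunt_stack` key is illustrative, not an existing digger.yml option):

```yaml
projects:
  - name: dev-stack
    dir: dev/
    # hypothetical flag: treat this project as a terragrunt stack
    terragrunt_stack: true
```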
Something like that would signal to digger to treat that project as a terragrunt stack, which means it would detect changes to the stack and, when it changes, trigger the corresponding stack-level command. The other option is to also support it with generation, so something like the sketch below.
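Again a hypothetical sketch, assuming a flag such as `include_stacks` under the generation block (the name is illustrative, not an existing option):

```yaml
generate_projects:
  blocks:
    - block_name: dev
      terragrunt: true
      root_dir: "dev/"
      # hypothetical flag: also pick up terragrunt.stack.hcl files under root_dir
      include_stacks: true
```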
This would also find terragrunt.stack.hcl files and parse them to incorporate each terragrunt stack as a digger project, while also respecting the underlying terragrunt.hcl modules so that the right stack is triggered when an underlying module changes.

I think we should support both approaches, but perhaps we can start with the simple one of a single stack. @hndrewaall, before discussing how to support this in digger, I would like to know what you see as the appropriate approach to supporting them. Do you have any different views on how it should be supported? Let me know!
Thanks for getting back to me! So IIUC, the idea of stacks is to be extremely backwards compatible with "vanilla" Terragrunt, and it exists largely as a mechanism to allow for easy templating of sets of "units" (what Terragrunt is calling instances of TF modules, and which currently roughly correspond to the "leaf" terragrunt.hcl files).

As such, under a stacks paradigm, we would still want to generate projects per unit, and be able to plan and apply on a per-project basis as needed (with potential dependencies between the units). This is essentially all the same as Terragrunt is today. I've been working with the ….

Some more context on our use case: ultimately, the main thing I am trying to achieve is getting the plan/statefile sharding and dependency tree of Terragrunt, with the CommentOps of TACOS, and without the operational pain of Atlantis :) The motivation for stacks for us is mostly that in our current paradigm (currently vanilla TF), we have a nice "default" set of modules that we promote across various environments, and it seems that stackfiles are the only approach that maintains this while still providing plan sharding (our plans are already painfully large and slow). In vanilla Terragrunt (which I have used before successfully), there is otherwise some painful manual copying ("promotion") of units/modules upwards through environments.
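To make the per-unit approach concrete, here is a sketch of pointing digger's project generation at the output of `terragrunt stack generate`, assuming the generated units end up under a `.terragrunt-stack/` directory (the block name and path here are illustrative, not taken from the PoC repo):

```yaml
generate_projects:
  blocks:
    - block_name: dev-stack-units
      terragrunt: true
      # traverse the units emitted by `terragrunt stack generate` (illustrative path)
      root_dir: "dev/.terragrunt-stack/"
```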
Note, by the way, that to achieve the environment "templating" approach I alluded to above, it is necessary to nest stackfiles (which is supported). The "inner" stackfile is the template, while the outer one instantiates each environment. In a multi-block paradigm, there could be multiple outer "instantiating" stackfiles. I have this working locally in a more or less satisfactory manner. Now it's just a matter of getting it working in Digger :)
Current locally working Stackfile PoC repo: https://github.com/hndrewaall/terragrunt-stackfile-poc

PR showing the vendoring of the stack-generated output (which I'm using to test my Digger branch): hndrewaall/terragrunt-stackfile-poc#11
I am trying to get a proof-of-concept repo working with Digger and the new Terragrunt Stacks RFC.

So far I have run into two main issues:

1. Digger needs `terragrunt stack generate` to run before it traverses the tree. So far I have had to use manual mode, as I haven't found another way to have the runner call `stack generate` before Digger begins to parse. Would it be possible to add configuration options to execute arbitrary scripts before project generation?
2. Digger runs `terragrunt show` on the generated plans, but for this to work, I need to be able to inject the `--experiment stacks stack run` prefix into the invocation. Is it possible to add extra args that will be used for ALL invocations of Terragrunt, even those not explicitly referenced in the workflow plan config? (See the sketch after this list.)
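For illustration only, here is a hypothetical shape such configuration might take in digger.yml; neither option exists in digger today, and all key names here are invented:

```yaml
# Hypothetical digger.yml additions; none of these keys exist in digger today
pre_generation_steps:          # would run before digger traverses the repo / generates projects
  - run: terragrunt stack generate

workflows:
  default:
    # would be applied to every terragrunt invocation, including the internal `show` step
    terragrunt_global_args: "--experiment stacks"
```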