A bit of shitposting today. These days everyone and their grandma is using infrastructure as code, most likely Terraform, to deploy infrastructure, whether it's AWS, Azure, GCP, you name it. As time goes on, your infrastructure gets larger (or bloated, if you will), and you'll end up splitting things up one way or another.
But what happens if we just YOLO everything into a single .tf file? Just for fun. Here’s how to do it.
the command
Open your terminal in your Terraform environment and run the following command:
find . -name "*.tf" -type f ! -name "tf_yolo_file.tf" -exec cat {} + > tf_yolo_file.tf
The ! -name filter matters: the shell creates tf_yolo_file.tf before find runs, so without it, cat can end up reading its own output back into itself.
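One catch: Terraform reads every .tf file in the working directory, so if the originals stay put, every resource is now defined twice and terraform plan will error out with duplicate definitions. A minimal cleanup sketch, assuming a flat layout without modules (the backup directory name is my own invention):

mkdir -p originals_backup
find . -maxdepth 1 -name "*.tf" ! -name "tf_yolo_file.tf" -exec mv {} originals_backup/ \;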
Now, if you want to count the lines so you can brag that it only took 5k lines of code to deploy an EKS cluster for a simple “Hello World” web app, run:
wc -l < tf_yolo_file.tf
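Before bragging, it's worth sanity-checking that the gigafile still parses. These are standard Terraform commands (assuming you've already run terraform init):

terraform fmt        # normalize the formatting of the mashed-up file
terraform validate   # confirm the configuration still parses and is internally consistent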
performance impact
Will it impact the speed of Terraform deployments? Well, in my case, with about 50-ish resources, the single gigachad file was actually faster by a second, so… no. The difference should be insignificant even with thousands of resources: it's the same dependency graph, the same resources, the same number of API calls. Maybe the parsing is a hair faster? Not sure.
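If you want to measure it yourself rather than take my word for it, a rough benchmark is just timing a plan against both layouts (same state and workspace assumed):

time terraform plan -refresh=false   # -refresh=false skips re-reading remote state, so you're mostly timing parse + graph build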
Does it make sense, though? Probably not. A single huge file definitely hurts productivity: IDE performance tanks (and memory usage climbs), and Git operations like diffs, blames, and merges get messier.
bottom line
Splitting your Terraform files is always the better option for usability, readability, and maintainability. This post is more about goofing around and shitposting than actual best practices.
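For the record, a common convention when splitting (not a requirement, since Terraform merges all .tf files in a directory regardless of name) looks something like:

main.tf        # core resources
variables.tf   # input variables
outputs.tf     # output values
versions.tf    # required providers and Terraform version constraints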
Still monitoring your infra? Check out justanotheruptime.com for a “better” monitoring solution :)