How Digger helps Employ manage Terraform at scale

With Digger, we get notified of a new version of the Digger workflow/image and it becomes a quick, easy upgrade. Without that, updating applications like Atlantis becomes a potential project in its own right: it requires heavier testing and validation to ensure everything still works, and it's something the team has to remember to do. This is something I would miss if, for some reason, we had to stop using Digger today.

Tell us about your Terraform automation tooling before Digger.

This varied by team and organization, and in some cases it was actually worse because not all of our infrastructure is built on Terraform. Across the Employ products, we had four different tools for managing Infrastructure as Code. Some were manual, some were automated, and some were maintenance challenges because we had built the integrations ourselves. For the Terraform-specific setups, we generally used Atlantis in a few places, but we ran into issues with the changes in the Terraform 1.8.2 binaries that required us to lock to 1.8.1 as the maximum version. We also experimented with Terraform Cloud, but as with many organizations, the changes to its licensing and cost structure would have meant a large increase in our total costs, so we had to pivot away from those implementations. For our JazzHR components, deployments were done by manually running Terraform on one of our engineers' machines.
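
For context, the kind of version cap described here can be expressed in an Atlantis repo-level config. The sketch below is purely illustrative; the project name and directory are hypothetical, not Employ's actual setup, and field support should be checked against the Atlantis version in use:

```yaml
# atlantis.yaml -- illustrative sketch, not Employ's actual configuration
version: 3
projects:
  - name: core-infra        # hypothetical project name
    dir: infra/core         # hypothetical directory
    # Cap Terraform at 1.8.1 until the 1.8.2 issue mentioned above is resolved
    terraform_version: v1.8.1
```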

Walk us through the experience taking Digger into production.

The process was actually pretty straightforward for us. We started in the 0.1.x timeframe, but even then there was a well-defined GitHub Action, and with a few questions we were able to get a basic proof-of-concept setup in place quickly. With the POC complete, we moved straight on to adding the assume-role behaviors needed beyond the POC account, enabling deployment and management across our various non-production and production environments. Getting those roles set up was the one bit of effort done outside of Digger's self-management. Tying in with dependabot/renovatebot, we ensured that as new releases come out, we are still able to provide the required access. The biggest item we needed to work on was the multi-account behavior, where we store our state in one account and bucket while the resources are created in another. It's a common pattern, though, and the wildcard support for directories made it extremely easy to handle. Along the way, in areas where we had to update a large number of stacks (module changes), we saw the time spent downloading all the required modules and plugins stretch out. As a result, we worked with the Digger team and community to build, test, and extend the GitHub Action to support caching of modules via GitHub's cache functionality, significantly reducing the time upgrades take because preexisting dependencies are already in the cache directories.
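
To give a flavor of that caching approach, the sketch below shows the general idea using the standard actions/cache step together with Terraform's plugin cache; the job layout, paths, and cache keys are illustrative assumptions, not the exact change that was contributed to the Digger action:

```yaml
# Illustrative job fragment: cache Terraform providers between workflow runs.
# Not the actual Digger workflow change; paths and keys are assumptions.
jobs:
  plan:
    runs-on: ubuntu-latest
    env:
      # Terraform reuses providers from this directory instead of re-downloading them
      TF_PLUGIN_CACHE_DIR: ${{ github.workspace }}/.terraform.d/plugin-cache
    steps:
      - uses: actions/checkout@v4

      - name: Create the plugin cache directory
        run: mkdir -p "$TF_PLUGIN_CACHE_DIR"

      - name: Restore Terraform provider cache
        uses: actions/cache@v4
        with:
          path: ${{ github.workspace }}/.terraform.d/plugin-cache
          # Key on the lock files so the cache refreshes when provider versions change
          key: terraform-plugins-${{ runner.os }}-${{ hashFiles('**/.terraform.lock.hcl') }}
          restore-keys: |
            terraform-plugins-${{ runner.os }}-

      # ... Digger / terraform init, plan, and apply steps run here and reuse the cached providers ...
```

With a warm cache, providers pinned in .terraform.lock.hcl are reused across runs instead of being downloaded again for every stack, which is where most of the time savings come from.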

What would you miss the most if you had to stop using Digger?

There are a few things I would miss. First and foremost would be the drift detection. While everyone has good intentions, it's unfortunately not as uncommon as any of us would like for the real state of resources to differ from what is in code. Having automation that helps us identify that drift and keep code and resources aligned has been wonderful. It has also removed many potential delivery delays of the "oh, there's a bigger set of changes here than I was expecting for this work" variety. Second would be not having to maintain a server. Atlantis is a wonderful project, but as teams are pressed to do more with less, running systems and servers and dealing with their failure modes, such as a full disk, just takes time away from other critical activities. As mentioned earlier, through renovatebot/dependabot we get notified of a new version of the Digger workflow/image and it becomes a quick, easy upgrade. Without that, updating applications like Atlantis is a potential project in its own right: it requires heavier testing and validation to ensure everything still works, and it's something the team has to remember to do. So, eliminating all of that is something I would certainly miss.
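
To make the drift idea concrete, here is one generic way to surface drift on a schedule using terraform plan's -detailed-exitcode flag. This is a sketch of the concept rather than Digger's built-in drift detection; the workflow name, schedule, and directory are hypothetical, and cloud credential setup is omitted:

```yaml
# Generic scheduled drift check (illustrative sketch, not Digger's implementation).
# `terraform plan -detailed-exitcode` returns 0 = no changes, 1 = error,
# 2 = non-empty plan, i.e. real resources have drifted from the code.
name: nightly-drift-check            # hypothetical workflow name
on:
  schedule:
    - cron: "0 6 * * *"              # once a day

jobs:
  drift:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: hashicorp/setup-terraform@v3
        with:
          # disable the output wrapper so the real exit code reaches the shell
          terraform_wrapper: false

      - name: Plan and flag drift
        working-directory: infra/core   # hypothetical stack directory
        run: |
          terraform init -input=false
          set +e
          terraform plan -input=false -detailed-exitcode
          code=$?
          set -e
          if [ "$code" -eq 1 ]; then
            echo "terraform plan failed" >&2
            exit 1
          elif [ "$code" -eq 2 ]; then
            echo "Drift detected: live resources differ from code." >&2
            exit 1   # fail the job so the team gets notified
          fi
```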

How was the help when you ran into problems?

Support has been great. We have worked with many of the core developers on everything from setup to debugging and triaging different issues. It's been a very collaborative relationship on both sides, which is why it was so easy to work together on adding capabilities to the product that help not only us as a customer but all customers get the best product possible. Other community members in the Slack also chip in with details, tips, and hints, and it has the feel of an active, vibrant community that ultimately wants everyone to succeed.

Describe Digger to a friend in one sentence.

“Serverless Infrastructure as Code to simplify and realize GitOps practices with ease.”

Joshua Jackson

Senior Director of Engineering

Employ Inc. provides people-first recruiting solutions that empower companies to overcome their greatest hiring challenges. Serving SMBs to global enterprises, Employ is the only company to provide personalized choice in its hiring solutions. Together, Employ and its brands (JazzHR, Lever, Jobvite and NXTThing RPO) serve more than 22,000 customers across industries. For more information, visit www.employinc.com.

Organization size: 785

Engineering team size: 100+

Open Source
Terraform Orchestration for Teams.

Automation, Collaboration and Governance for Terraform within your CI/CD system