Terraform S3 Lockfile: The New Way to Lock State Without DynamoDB

For years, the recommended way to securely manage Terraform state in AWS was to store state in S3 and use DynamoDB for locking. This worked well, but it also meant extra AWS resources, extra IAM permissions, and more setup overhead.
Now that’s changing.
As of Terraform 1.10, the S3 backend supports its own native lockfile mechanism. No DynamoDB required.
Let’s dive into what this means, how it works, and how you can migrate.
Why Terraform Needs State Locking
Terraform stores information about your infrastructure in a state file (terraform.tfstate). When you run terraform apply, Terraform updates this state file after making changes in the cloud.
But what happens if two engineers (or a CI/CD pipeline + an engineer) run apply at the same time?
They both read the “old” state.
They both make changes.
They overwrite each other’s updates.
The result? State corruption.
That’s why Terraform implements state locking: only one run can hold the lock at a time.
The Old Way: DynamoDB Locking
Traditionally, we solved this by:
Storing state in S3
Using a DynamoDB table for locks
Terraform would insert a lock entry in DynamoDB at the start of a run, and delete it when finished. This worked, but it required:
Creating and maintaining a DynamoDB table
Adding IAM permissions for DynamoDB
Paying a tiny but nonzero cost for DynamoDB
The New Way: S3 Lockfile
Terraform now supports S3-native locking. Instead of DynamoDB, it uses a .tflock object stored in the same bucket alongside your state file.
Example:
my-bucket/
└── envs/prod/network/terraform.tfstate
└── envs/prod/network/terraform.tfstate.tflock 👈 new lockfile
When you run terraform apply:
Terraform creates the .tflock file (via an S3 conditional write, which succeeds only if the object doesn't already exist).
If another process tries to run, its attempt to create the lockfile fails, and Terraform reports the state as locked.
When the run finishes, Terraform deletes the .tflock file.
Simple. Reliable. No DynamoDB.
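The acquire/release flow above can be sketched as an exclusive-create pattern. This is an illustrative simulation using a local file, not Terraform's actual implementation: Terraform performs the equivalent with a conditional PUT against S3, but the mutual-exclusion logic is the same.

```python
import os

def acquire_lock(path: str) -> bool:
    """Try to create the lockfile; fail if it already exists.

    Terraform does the equivalent with an S3 conditional write,
    which only succeeds when no object with that key exists yet.
    """
    try:
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
        return True
    except FileExistsError:
        return False

def release_lock(path: str) -> None:
    """Delete the lockfile when the run finishes."""
    os.remove(path)

lock = "terraform.tfstate.tflock"
assert acquire_lock(lock)      # first run gets the lock
assert not acquire_lock(lock)  # a concurrent run is rejected
release_lock(lock)
assert acquire_lock(lock)      # lock is free again after release
release_lock(lock)
```

Because the "create only if absent" check and the creation happen as one atomic operation, two concurrent runs can never both believe they hold the lock.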
How to Enable S3 Lockfile
In your Terraform backend configuration:
terraform {
  backend "s3" {
    bucket       = "my-terraform-state"
    key          = "envs/prod/network/terraform.tfstate"
    region       = "ap-south-1"
    encrypt      = true
    use_lockfile = true # 👈 enables the new lockfile mechanism
  }
}
Re-initialize:
terraform init -migrate-state
IAM Permissions for S3 Lockfile
In addition to normal S3 permissions for the state file, you’ll need Get/Put/Delete on the .tflock
object.
Here’s a minimal IAM policy snippet:
{
  "Effect": "Allow",
  "Action": [
    "s3:GetObject",
    "s3:PutObject",
    "s3:DeleteObject"
  ],
  "Resource": "arn:aws:s3:::my-terraform-state/envs/prod/network/terraform.tfstate.tflock"
}
Don’t forget to also allow access to the state file itself.
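Putting the two together, a complete policy for a single state path might look like the following sketch. The bucket name, key, and Sid values are placeholders; adjust them to your layout.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "StateAndLockObjectAccess",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": [
        "arn:aws:s3:::my-terraform-state/envs/prod/network/terraform.tfstate",
        "arn:aws:s3:::my-terraform-state/envs/prod/network/terraform.tfstate.tflock"
      ]
    },
    {
      "Sid": "ListStateBucket",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-terraform-state"
    }
  ]
}
```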
Migrating From DynamoDB Locking
If your backend currently looks like this:
backend "s3" {
  bucket         = "my-terraform-state"
  key            = "envs/prod/network/terraform.tfstate"
  region         = "ap-south-1"
  dynamodb_table = "terraform-locks" # old way
}
Change it to:
backend "s3" {
  bucket       = "my-terraform-state"
  key          = "envs/prod/network/terraform.tfstate"
  region       = "ap-south-1"
  use_lockfile = true
}
Then run:
terraform init -migrate-state
Terraform can temporarily use both DynamoDB and lockfile, but DynamoDB locking is now deprecated and will be removed in a future release. Plan to migrate sooner rather than later.
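For a cautious rollout, you can set both options during the transition so each run takes both locks, then drop the DynamoDB line once you're confident. This is a sketch; verify the exact behavior against your Terraform version's S3 backend documentation.

```hcl
backend "s3" {
  bucket         = "my-terraform-state"
  key            = "envs/prod/network/terraform.tfstate"
  region         = "ap-south-1"
  use_lockfile   = true                # new S3 lockfile
  dynamodb_table = "terraform-locks"   # kept temporarily during migration
}
```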
Best Practices
Enable S3 Versioning → Always turn on bucket versioning for your state bucket. This lets you roll back if state corruption happens.
Force Unlock → If a run dies and the .tflock file remains, use terraform force-unlock <LOCK_ID>.
CI/CD Discipline → Let your CI/CD pipelines handle terraform apply, and keep local runs for plan to avoid collisions.
Restrict IAM → Make sure only trusted CI roles or engineers can access both state and lock files.
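For the versioning recommendation above, a minimal sketch using the AWS provider's v4+ split-resource syntax (resource names are illustrative):

```hcl
resource "aws_s3_bucket" "tf_state" {
  bucket = "my-terraform-state"
}

# Versioning lets you recover an earlier state version if corruption occurs.
resource "aws_s3_bucket_versioning" "tf_state" {
  bucket = aws_s3_bucket.tf_state.id

  versioning_configuration {
    status = "Enabled"
  }
}
```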