S3 and RDS are the two AWS data services that turn up in almost every workload, and they’re also the two where misconfiguration causes the most public pain. Most of the well-publicised cloud breaches over the last few years involve one or both. The good news is that the controls needed to lock them down properly are well understood; the harder part is making those controls the default in your organisation rather than something each team has to remember.
This post covers the Terraform module patterns I’d actually use for secure-by-default S3 buckets and RDS instances in a multi-account AWS estate. The aim isn’t to enumerate every option — it’s to show the shape of a module that gets the security defaults right and lets engineering teams self-serve without re-litigating the basics every time.
The principle: secure defaults, opinionated modules
If your security model relies on engineers reading documentation and remembering to enable encryption, you’ve already lost. The modules teams consume should make secure configurations the path of least resistance and insecure configurations either impossible or loud. That means:
- Required parameters force teams to make the security-relevant decisions explicitly (KMS key, retention period, deletion protection)
- Defaults are conservative — encrypted, private, logged, backed up
- Variables that could weaken security (public access, deletion bypass, unencrypted) either don’t exist or trigger Sentinel/OPA policy checks at plan time
- The module is versioned and consumed via a registry, so security updates propagate
This is not novel — it’s the same pattern HashiCorp, Gruntwork, and AWS have been advocating for years — but it’s still surprisingly uncommon in practice.
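To make the registry point concrete, consumption looks something like this (the registry address, module name, and bucket names are purely illustrative):

module "orders_data" {
  # Hypothetical registry address; pin a version range so security
  # fixes shipped in the module propagate on the next upgrade.
  source  = "app.terraform.io/example-org/s3-secure-bucket/aws"
  version = "~> 2.1"

  bucket_name       = "example-org-orders-data"
  kms_key_arn       = aws_kms_key.orders.arn # assumes a CMK defined elsewhere
  access_log_bucket = "example-org-central-access-logs"
}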
A secure S3 bucket module
Here’s the rough shape of a module I’d use as a foundation. This is illustrative rather than copy-paste production code, but the structure is what matters.
variable "bucket_name" {
type = string
description = "Bucket name. Must be globally unique."
}
variable "kms_key_arn" {
type = string
description = "KMS key ARN for SSE-KMS encryption. Customer-managed keys only."
}
variable "lifecycle_transitions" {
type = list(object({
days = number
storage_class = string
}))
default = [
{ days = 90, storage_class = "STANDARD_IA" },
{ days = 365, storage_class = "GLACIER_IR" },
]
}
resource "aws_s3_bucket" "this" {
bucket = var.bucket_name
}
resource "aws_s3_bucket_public_access_block" "this" {
bucket = aws_s3_bucket.this.id
block_public_acls = true
block_public_policy = true
ignore_public_acls = true
restrict_public_buckets = true
}
resource "aws_s3_bucket_server_side_encryption_configuration" "this" {
bucket = aws_s3_bucket.this.id
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "aws:kms"
kms_master_key_id = var.kms_key_arn
}
bucket_key_enabled = true
}
}
resource "aws_s3_bucket_versioning" "this" {
bucket = aws_s3_bucket.this.id
versioning_configuration {
status = "Enabled"
}
}
resource "aws_s3_bucket_logging" "this" {
bucket = aws_s3_bucket.this.id
target_bucket = var.access_log_bucket
target_prefix = "${var.bucket_name}/"
}
resource "aws_s3_bucket_lifecycle_configuration" "this" {
bucket = aws_s3_bucket.this.id
rule {
id = "transitions"
status = "Enabled"
dynamic "transition" {
for_each = var.lifecycle_transitions
content {
days = transition.value.days
storage_class = transition.value.storage_class
}
}
noncurrent_version_expiration {
noncurrent_days = 90
}
abort_incomplete_multipart_upload {
days_after_initiation = 7
}
}
}Code language: PHP (php)
A few specific decisions worth calling out:
Customer-managed KMS key is required, not optional. SSE-S3 (AES-256 with S3-managed keys) is the easy default and it’s adequate for many workloads, but it gives you no key policy to audit. For any data that’s worth thinking about, customer-managed KMS keys give you a clear access boundary and CloudTrail visibility on every key use. The module makes this required, which forces the conversation.
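For reference, the kind of key the module expects is nothing exotic. A minimal sketch, assuming the key lives in the workload account; the names and alias are illustrative:

resource "aws_kms_key" "orders" {
  description         = "CMK for the orders data bucket"
  enable_key_rotation = true # automatic annual rotation
  # A production key would carry an explicit key policy here, scoping
  # kms:Decrypt and kms:GenerateDataKey* to the consuming roles.
}

resource "aws_kms_alias" "orders" {
  name          = "alias/orders-data"
  target_key_id = aws_kms_key.orders.key_id
}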
Public access block is non-negotiable. All four settings on, all the time. If a workload genuinely needs public read access — say, a static asset bucket fronted by CloudFront — that’s a different module with a different name (s3_public_assets or similar), and the policy review for using it is much stricter. Don’t let one module serve both private data and public assets.
Versioning is on by default. This is the single most useful feature for ransomware resilience. Combined with MFA delete on critical buckets and a lifecycle rule expiring noncurrent versions after a sensible window, you get a recoverable position without unbounded storage costs.
Bucket logging is on by default. Every bucket logs to a central access log bucket. The access log bucket itself is in a separate account in the security/logging OU, with object lock enabled and write-only IAM permissions for the source accounts. This means even an attacker with full admin in a workload account can’t tamper with the logs.
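The write-only delivery path is enforced in the log bucket’s own policy. A sketch of what that looks like in the logging account, assuming an aws_s3_bucket.access_logs resource and a var.source_account_ids list defined there; S3 delivers server access logs via the logging.s3.amazonaws.com service principal:

data "aws_iam_policy_document" "access_logs" {
  statement {
    sid     = "S3ServerAccessLogDelivery"
    effect  = "Allow"
    actions = ["s3:PutObject"]

    principals {
      type        = "Service"
      identifiers = ["logging.s3.amazonaws.com"]
    }

    resources = ["${aws_s3_bucket.access_logs.arn}/*"]

    # Only the workload accounts we expect may deliver logs.
    condition {
      test     = "StringEquals"
      variable = "aws:SourceAccount"
      values   = var.source_account_ids
    }
  }
}

resource "aws_s3_bucket_policy" "access_logs" {
  bucket = aws_s3_bucket.access_logs.id
  policy = data.aws_iam_policy_document.access_logs.json
}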
What I’d add to this in production:
- Object Lock for buckets holding immutable data (compliance archives, audit logs, immutable backups). Compliance mode if you genuinely need it; governance mode if you want flexibility for legitimate ops scenarios.
- A bucket policy denying any non-TLS access (deny when aws:SecureTransport is false). Belt and braces against any client that defaults to HTTP; a sketch follows this list.
- Replication to a separate account for buckets where data loss is genuinely catastrophic. Cross-account, cross-region, into an account with a different administrative trust boundary.
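The TLS-only policy is short enough to show in full. A sketch of the shape it would take inside the same bucket module:

data "aws_iam_policy_document" "tls_only" {
  statement {
    sid     = "DenyInsecureTransport"
    effect  = "Deny"
    actions = ["s3:*"]

    principals {
      type        = "AWS"
      identifiers = ["*"]
    }

    resources = [
      aws_s3_bucket.this.arn,
      "${aws_s3_bucket.this.arn}/*",
    ]

    # Deny any request that arrives over plain HTTP.
    condition {
      test     = "Bool"
      variable = "aws:SecureTransport"
      values   = ["false"]
    }
  }
}

resource "aws_s3_bucket_policy" "tls_only" {
  bucket = aws_s3_bucket.this.id
  policy = data.aws_iam_policy_document.tls_only.json
}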
A secure RDS instance module
RDS has more moving parts than S3, so the module is bigger, but the principle is the same. Encryption on, deletion protection on, logging on, backups configured properly.
variable "kms_key_arn" {
type = string
description = "KMS key for storage encryption."
}
variable "subnet_ids" {
type = list(string)
description = "Private subnet IDs. Must be private. The module will not deploy to public subnets."
}
variable "backup_retention_days" {
type = number
default = 35
validation {
condition = var.backup_retention_days >= 7
error_message = "Backup retention must be at least 7 days."
}
}
resource "aws_db_subnet_group" "this" {
name = "${var.identifier}-subnets"
subnet_ids = var.subnet_ids
}
resource "aws_db_instance" "this" {
identifier = var.identifier
engine = var.engine
engine_version = var.engine_version
instance_class = var.instance_class
allocated_storage = var.allocated_storage
max_allocated_storage = var.max_allocated_storage
storage_type = "gp3"
storage_encrypted = true
kms_key_id = var.kms_key_arn
db_subnet_group_name = aws_db_subnet_group.this.name
vpc_security_group_ids = var.security_group_ids
publicly_accessible = false
backup_retention_period = var.backup_retention_days
backup_window = "02:00-03:00"
maintenance_window = "sun:03:30-sun:05:00"
copy_tags_to_snapshot = true
delete_automated_backups = false
deletion_protection = true
skip_final_snapshot = false
final_snapshot_identifier = "${var.identifier}-final-${formatdate("YYYYMMDDhhmm", timestamp())}"
iam_database_authentication_enabled = true
performance_insights_enabled = true
performance_insights_kms_key_id = var.kms_key_arn
performance_insights_retention_period = 7
enabled_cloudwatch_logs_exports = local.log_exports[var.engine]
auto_minor_version_upgrade = true
apply_immediately = false
monitoring_interval = 60
monitoring_role_arn = var.monitoring_role_arn
lifecycle {
ignore_changes = [final_snapshot_identifier]
}
}
locals {
log_exports = {
postgres = ["postgresql", "upgrade"]
mysql = ["audit", "error", "general", "slowquery"]
mariadb = ["audit", "error", "general", "slowquery"]
}
}Code language: PHP (php)
Key decisions:
No public accessibility, ever. publicly_accessible = false is hardcoded, not a variable. If a workload needs database access from outside the VPC, that’s a connectivity pattern problem (Client VPN, Direct Connect, bastion, or PrivateLink), not an RDS configuration problem.
Storage encryption with a customer-managed KMS key is required. Same reasoning as S3 — auditable key policy, visible key use in CloudTrail.
Deletion protection is on by default, with a final snapshot guaranteed via skip_final_snapshot = false. The combination means you can’t accidentally destroy a database and lose the data, even with a Terraform apply gone wrong.
Backups retained for at least 7 days, with 35 as the default. The validation rule prevents teams setting retention to zero “just for dev”. If a database genuinely doesn’t need backups, it probably shouldn’t be RDS.
IAM database authentication enabled. Even if a workload uses traditional username/password initially, having IAM auth available means the migration path away from long-lived database credentials is open. Combined with Secrets Manager rotation for the master credentials, this gets you to a much better place than hand-managed .env files.
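The IAM half of that migration path is a policy granting rds-db:connect against the instance. A sketch, where app_user is a hypothetical database user created with the engine’s IAM auth plugin enabled:

data "aws_caller_identity" "current" {}
data "aws_region" "current" {}

data "aws_iam_policy_document" "db_connect" {
  statement {
    effect  = "Allow"
    actions = ["rds-db:connect"]

    # rds-db ARNs use the instance's DbiResourceId, not its identifier.
    resources = [
      "arn:aws:rds-db:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:dbuser:${aws_db_instance.this.resource_id}/app_user"
    ]
  }
}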
CloudWatch Logs exports are engine-aware. Postgres exports different logs than MySQL. The module looks up the right set based on engine type rather than asking the consumer to know.
apply_immediately = false. Maintenance changes apply during the next maintenance window, not on the next Terraform apply. This prevents accidental restarts in business hours when somebody adjusts a parameter group. Teams can override this for genuine emergencies.
What I’d add for production:
- Multi-AZ as the default for any non-dev environment, with a variable to opt out for dev only (see the sketch after this list)
- Aurora rather than RDS for new deployments where the engine supports it — better failover, better backup, better scaling
- Enhanced monitoring and Performance Insights into a centralised observability account via CloudWatch cross-account sharing
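The Multi-AZ default with a dev-only opt-out might look like this; the environment variable and its allowed values are illustrative:

variable "environment" {
  type = string
}

variable "multi_az" {
  type    = bool
  default = true
}

# Wired into the aws_db_instance "this" resource above:
#
#   multi_az = var.multi_az
#
#   lifecycle {
#     precondition {
#       condition     = var.multi_az || var.environment == "dev"
#       error_message = "Multi-AZ can only be disabled in dev environments."
#     }
#   }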
Wrapping the modules with policy-as-code
Modules with secure defaults are necessary but not sufficient. Engineers can still consume them with bad parameters, or bypass them entirely with a raw resource. Policy-as-code at plan time is what closes the gap.
The two patterns that work well:
Sentinel or OPA policies in your CI pipeline that block plans which create raw aws_s3_bucket or aws_db_instance resources outside the approved modules. This makes the module the only path. Pair it with a clear escalation route for the genuine exceptions.
Service Control Policies at the AWS Organizations layer that prevent the worst outcomes regardless of Terraform — denying creation of unencrypted EBS volumes, blocking the disabling of S3 public access blocks at the account level, denying deletion of CloudTrail. SCPs are blunt instruments, but they’re enforced even if your IaC tooling fails.
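A sketch of that guardrail layer in the same Terraform vocabulary, assuming an existing workloads OU resource in the management account’s configuration:

resource "aws_organizations_policy" "baseline_guardrails" {
  name = "baseline-guardrails"
  type = "SERVICE_CONTROL_POLICY"

  content = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid       = "DenyUnencryptedEBS"
        Effect    = "Deny"
        Action    = "ec2:CreateVolume"
        Resource  = "*"
        Condition = { Bool = { "ec2:Encrypted" = "false" } }
      },
      {
        Sid      = "DenyAccountPABTampering"
        Effect   = "Deny"
        Action   = "s3:PutAccountPublicAccessBlock"
        Resource = "*"
      },
      {
        Sid      = "DenyCloudTrailTampering"
        Effect   = "Deny"
        Action   = ["cloudtrail:StopLogging", "cloudtrail:DeleteTrail"]
        Resource = "*"
      }
    ]
  })
}

resource "aws_organizations_policy_attachment" "workloads" {
  policy_id = aws_organizations_policy.baseline_guardrails.id
  target_id = aws_organizations_organizational_unit.workloads.id
}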
The combination — secure modules, policy gates at plan, organisational guardrails — is what makes secure-by-default actually default.
What this is not
This pattern doesn’t replace threat modelling, doesn’t make data classification go away, and doesn’t substitute for understanding what your workload actually does with the data it stores. A secure S3 bucket containing the wrong data being shared with the wrong account is still a breach. The modules are the hygiene layer; the architectural thinking still has to happen on top.
But getting the hygiene right consistently across a hundred-account estate is worth a lot. Most of the breaches that make the news are hygiene failures, not sophisticated attacks.