Remote environment: Support infrastructure files for Terraform. #4973
filesToUpload := sbd.azdContext.GetEnvironmentInfraFiles(env.name)

for _, file := range filesToUpload {
	fileName := filepath.Base(file)
	fileBuffer, err := os.Open(file)
	if err != nil {
		return fmt.Errorf("failed to open file for upload: %w", err)
	}
	defer fileBuffer.Close()
	err = sbd.blobClient.Upload(ctx, fmt.Sprintf("%s/%s/%s", env.name, InfraFilesDir, fileName), fileBuffer)
	if err != nil {
		return fmt.Errorf("uploading infra file: %w", err)
	}
}
@HadwaAbdelhalem Instead of being selective here - what if we just upload any files that are in the environment folder? That would solve this specific use case as well as any other similar issues.
@wbreza I wanted to be more specific in case users have local tmp or test files in the local environment folder that were not meant to be persisted, and to follow the pattern of environment items being well structured and defined (config file, .env file) by adding infra files.
What would happen with terraform.tfstate.lock.info?

Synchronizing a local tfstate file doesn't seem to account for concurrency, locking, and conflict resolution, which might leave customers in a much worse state than just using Terraform remote state. It exposes customers to inconsistencies and potential data loss when a conflict occurs. I have not seen manually synchronizing a local state file recommended over using Terraform remote state. I believe we should be recommending best practices to our customers using azd.
I think the better approach here is to just recommend that people use Terraform remote state when they're using azd remote environments (perhaps with a warning when they use local state files and azd remote environments at the same time). If we want to make it automagic, then azd could dynamically generate a backend config for Terraform (pointing to the same Azure Storage container, perhaps under the hood using the environment's storage account) and run terraform init with it; see the sketch below. There also seems to be some overlap with Terraform workspaces, which could probably be better integrated here as well.

Even if we continue with this PR's approach, I think using Terraform remote state should be a strong recommendation in the docs (over synchronizing local files), which we could also do a docs PR for.
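To make the automagic idea concrete, here is a minimal sketch (not azd's actual code; the function signatures, resource names, and file layout are all assumptions) of generating an azurerm backend config and re-initializing Terraform with it:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// writeBackendConfig emits a backend.tf that points Terraform state at the
// environment's storage account. All parameter values are illustrative.
func writeBackendConfig(infraDir, resourceGroup, account, container, envName string) error {
	backend := fmt.Sprintf(`terraform {
  backend "azurerm" {
    resource_group_name  = %q
    storage_account_name = %q
    container_name       = %q
    key                  = %q
  }
}
`, resourceGroup, account, container, envName+".terraform.tfstate")
	return os.WriteFile(filepath.Join(infraDir, "backend.tf"), []byte(backend), 0o644)
}

// initBackend runs `terraform init -reconfigure` so the generated backend
// takes effect; existing local state could instead be moved to the backend
// with `terraform init -migrate-state`.
func initBackend(infraDir string) error {
	cmd := exec.Command("terraform", "init", "-reconfigure")
	cmd.Dir = infraDir
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// Hypothetical values; azd would derive these from the remote environment config.
	if err := writeBackendConfig("infra", "rg-azd", "stazdenvs", "tfstate", "dev"); err != nil {
		panic(err)
	}
	if err := initBackend("infra"); err != nil {
		panic(err)
	}
}
```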
}

for _, item := range items {
	if strings.Contains(item.Path, sbd.InfraPath(env)) {
Using Contains is fragile: if one environment's name is a substring of another, this strings.Contains check can incorrectly match blobs from a different environment. For example, an environment named "prod" would match blobs for "prod2". A stricter check is sketched below.
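One way to tighten this, assuming InfraPath returns a path-style prefix (an assumption about its format), is to anchor the match at the start of the blob path and require a trailing separator:

```go
package store

import "strings"

// belongsToEnv reports whether blobPath falls under the environment's infra
// prefix. Anchoring with HasPrefix and forcing a trailing "/" prevents an
// environment named "prod" from matching blobs that belong to "prod2".
func belongsToEnv(blobPath, infraPrefix string) bool {
	if !strings.HasSuffix(infraPrefix, "/") {
		infraPrefix += "/"
	}
	return strings.HasPrefix(blobPath, infraPrefix)
}
```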
// Upload Infra Files if any
filesToUpload := sbd.azdContext.GetEnvironmentInfraFiles(env.name)

for _, file := range filesToUpload {
What happens if some uploads succeed and some fail? This would leave an environment in an inconsistent state. Is there a way to make the upload more atomic? Maybe upload a zip of all the files instead of uploading files individually?
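As a sketch of the zip idea (standard library only; wiring the resulting buffer into the existing blobClient.Upload call, e.g. as a single "ENVNAME/infrafiles.zip" blob, is an assumption), the files could be bundled into one archive so the remote environment sees either all of the infra files or none of them:

```go
package store

import (
	"archive/zip"
	"bytes"
	"io"
	"os"
	"path/filepath"
)

// zipFiles bundles the given files into one in-memory zip archive so the
// whole set can be uploaded as a single blob.
func zipFiles(paths []string) (*bytes.Buffer, error) {
	buf := &bytes.Buffer{}
	zw := zip.NewWriter(buf)
	for _, p := range paths {
		if err := addFile(zw, p); err != nil {
			return nil, err
		}
	}
	if err := zw.Close(); err != nil {
		return nil, err
	}
	return buf, nil
}

// addFile copies one local file into the archive under its base name.
func addFile(zw *zip.Writer, path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	w, err := zw.Create(filepath.Base(path))
	if err != nil {
		return err
	}
	_, err = io.Copy(w, f)
	return err
}
```

Since a single blob write either fully succeeds or fully fails, a partially failed sync could no longer leave a mixed set of old and new files in the container.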
@@ -143,6 +153,22 @@ func (sbd *StorageBlobDataStore) Save(ctx context.Context, env *Environment, opt
	return fmt.Errorf("uploading .env: %w", describeError(err))
}

// Upload Infra Files if any
filesToUpload := sbd.azdContext.GetEnvironmentInfraFiles(env.name)
Can files be deleted from the remote env? How would I remove an individual file and have it removed from storage and from other machines using this env?
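One possible shape for a delete sync, sketched below with hypothetical stand-ins (blobItem and the del callback are illustrative, not azd's blob client API): after listing the remote blobs under the environment's infra prefix, any blob without a matching local file would be pruned.

```go
package store

import "path/filepath"

// blobItem is a hypothetical stand-in for a listed remote blob.
type blobItem struct{ Path string }

// pruneRemote deletes every remote infra blob that no longer has a matching
// local file, so deletions propagate to other machines on their next refresh.
func pruneRemote(remote []blobItem, localFiles []string, del func(path string) error) error {
	local := make(map[string]bool, len(localFiles))
	for _, f := range localFiles {
		local[filepath.Base(f)] = true
	}
	for _, b := range remote {
		if !local[filepath.Base(b.Path)] {
			if err := del(b.Path); err != nil {
				return err
			}
		}
	}
	return nil
}
```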
	return fmt.Errorf("failed to open file for upload: %w", err)
}
defer fileBuffer.Close()
err = sbd.blobClient.Upload(ctx, fmt.Sprintf("%s/%s/%s", env.name, InfraFilesDir, fileName), fileBuffer)
How are secrets protected in this approach? It seems like a user could upload a lot of sensitive information to a storage account, and I don't see any documentation on how to secure it correctly. There's no mention of turning on blob encryption, turning off public access, restricting network access, or any other best practices for putting tfstate in a storage account: https://learn.microsoft.com/en-us/azure/developer/azure-developer-cli/remote-environments-support
If I'm working across two computers, how is the other computer notified of the change so I have the latest state? Do I have to manually re-run azd env refresh?
When using Terraform as the IaC deployment tool, environment-specific additional files are created inside the .azure/ENVNAME folder, such as the state files and plan files used for the deployment.

This PR updates the remote environment feature to manage the infrastructure files for Terraform deployments in the remote data store (Azure Blob Storage) when azd is configured to use remote environments, offering consistent deployment details across machines.
Changes in behavior
azd provision now also stores the .azure/envName/infra directory files in the remote environment storage blob after the deployment completes.
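Based on the Upload call in the diff, the resulting blob layout would look roughly like the following (the infrafiles segment stands for the InfraFilesDir constant, whose exact value is an assumption, and the file names are only examples):

ENVNAME/infrafiles/terraform.tfstate
ENVNAME/infrafiles/main.tfplan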
fixes #4756