OpenAI Codex on Amazon Bedrock: What Works Today, AWS Setup & Guardrails
The OpenAI and Amazon partnership makes Bedrock a serious path for enterprise agents, but it does not mean every OpenAI runtime or Codex-class model is generally available in Bedrock today. The important move for platform teams is to separate what works now from what is announced, private-preview-gated, or still needs to be validated in your AWS account.
The April 27 Microsoft/OpenAI amendment reduces a major blocker: OpenAI can now serve products across any cloud provider, and Microsoft's license to OpenAI models and products is now non-exclusive through 2032.
Last verified: April 27, 2026. OpenAI's April 27 amended agreement with Microsoft materially strengthens the AWS path: OpenAI says it can now serve all of its products to customers on any cloud provider, while Microsoft remains the primary cloud partner and OpenAI products still ship first on Azure unless Microsoft cannot or chooses not to support the needed capabilities. That does not mean every OpenAI model, Codex runtime, Frontier capability, or Stateful Runtime is generally available in every Bedrock account today. It does mean AWS platform teams should treat OpenAI-on-Bedrock readiness as time-sensitive rather than speculative.
This guide walks through the AWS foundation: account design, Bedrock access, model IDs, IAM Identity Center, Codex configuration, budgets, and rollout checks. Use it to validate today's OpenAI gpt-oss path and keep the account ready for future OpenAI access on AWS.
What works today and what is not yet generally available
| Capability | Updated status on Apr 27, 2026 | What to do now |
|---|---|---|
| OpenAI GPT OSS on Bedrock | Available today as OpenAI open-weight Bedrock models, including documented Runtime model IDs openai.gpt-oss-20b-1:0 and openai.gpt-oss-120b-1:0. | Use it to validate IAM, endpoints, quotas, budgets, CloudTrail, and developer workflow. |
| Codex CLI with Bedrock | Available through Codex CLI's Bedrock provider; the official Codex changelog shows the provider added in 0.123.0, and 0.124.0 added first-class Bedrock support for OpenAI-compatible providers, including AWS SigV4 signing and AWS credential-based auth. | Use the latest Codex CLI; for this guide's AWS profile/SigV4 path, use 0.124.0 or later unless intentionally testing 0.123.0. |
| Stateful Runtime Environment in Bedrock | OpenAI and Amazon say it will be available through Bedrock and is expected to launch in the next few months. | Prepare identity, network, audit, cost, and approval paths now; do not hard-code future runtime IDs. |
| OpenAI Frontier on AWS | OpenAI and Amazon say AWS will be the exclusive third-party cloud distribution provider for OpenAI Frontier. | Treat it as a likely enterprise roadmap item, not a provisionable Bedrock model until AWS or OpenAI expose it in your account. |
| GPT-5.5 / GPT-5.5 Pro / GPT-5.3-Codex in Bedrock | OpenAI released GPT-5.5 on April 23, 2026, and OpenAI API docs now list gpt-5.5. The Microsoft/OpenAI amendment makes cross-cloud serving commercially clearer, but AWS docs still need to be checked for actual Bedrock availability. | Keep the warning: validate with official AWS docs and list-foundation-models before promising production use. |
What changed in the Microsoft/OpenAI agreement?
On April 27, 2026, OpenAI and Microsoft amended their partnership. Microsoft remains OpenAI's primary cloud partner, and OpenAI products will still ship first on Azure unless Microsoft cannot or chooses not to support the necessary capabilities. The important change for AWS customers is that OpenAI can now serve all of its products across any cloud provider, and Microsoft's OpenAI IP license is now non-exclusive.
For AWS teams, the Amazon/OpenAI roadmap now lines up better with the Microsoft/OpenAI agreement. That makes AWS readiness worth doing now, but it does not remove the account-level checks: model IDs, regions, quotas, endpoints, IAM, logging, and cost controls.
Do not overstate the update. The amendment reduces a commercial constraint, but it is not a Bedrock GA notice for every OpenAI product. Prepare the AWS foundation now; do not promise production use until the model, endpoint, region, quota, and runtime are visible in your account.
Should your team start now or wait?
| Situation | Recommendation | Why |
|---|---|---|
| You want Stateful Runtime or Frontier directly | Start the platform work, not the production commitment. | Prepare the AWS account, IAM, networking, audit, budget, and approval model now. Hold production commitments until AWS/OpenAI access is visible in your account. |
| You want to validate Codex through Bedrock with OpenAI models available today | Start now in an isolated account or sandbox OU. | GPT OSS 120B lets you test credentials, endpoints, permissions, observability, and spend limits. |
| You support US/Canadian customers or have residency, audit, or private networking requirements | Run a readiness review before the pilot. | The risk is less about the config file and more about identity, regions, logs, traffic path, data residency, and usage containment. |
| You plan to release this to multiple developers | Add guardrails before rollout. | Coding agents can create expensive loops, repeated calls, and usage that is hard to attribute after the fact. |
AWS account strategy for Codex and agent pilots
Do not start in production just because it is convenient. The AWS account boundary is one of the simplest ways to control billing, permissions, CloudTrail, Service Control Policies, and blast radius.
| Model | When to use it | Minimum controls |
|---|---|---|
| New AWS account | Startup, technical lab, or validation without an existing AWS environment. | Corporate email or secure list for root, MFA, alternate contacts, IAM Identity Center, and a monthly budget before the first test. |
| AWS Organizations member account | Company with Organizations, Control Tower, or a landing zone. | Sandbox or AI platform OU, regional/model SCPs, centralized CloudTrail, account budget, and separate permission sets. |
| AI platform account | Teams operating Bedrock for multiple applications. | Projects by application, cost tags, clear owners, IAM review, and a promotion path to production. |
| Existing production account | Only after the path has been validated. | Change management, endpoint policy, CloudTrail, service budget, rollback tests, and compliance review. |
Region and data residency for US and Canadian teams
For US and Canadian teams, treat us-east-1, us-east-2, and us-west-2 as the primary validation path for OpenAI GPT OSS on Bedrock today. Those regions appear in AWS model documentation for openai.gpt-oss-120b-1:0 and are the most practical path for testing the current Codex provider, IAM permissions, the Mantle endpoint, Budgets, CloudTrail, and PrivateLink.
For Canadian organizations, latency is secondary to data residency and policy approval. Document whether the pilot may invoke models in a US region while usage remains bounded by account, IAM, logs, and data policy. If Canadian data residency is mandatory, document that restriction and keep sensitive data out of the pilot until the required model, endpoint, quota, and network path are available and approved in the required Canadian region. For Brazil or LATAM deployments, the same logic applies to sa-east-1: validate model availability, endpoint support, quota, PrivateLink, and LGPD requirements before moving sensitive data into the pilot.
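The residency decision above is easiest to enforce if the pilot tooling checks the region before any call is made. The sketch below encodes this guide's validation regions and a mandatory-residency switch; the split into "validation" versus "residency-restricted" regions is an illustrative assumption, and allowing a `ca-` region here does not replace the formal approval the text describes.

```python
# Sketch: gate pilot traffic by a documented region allow-list before any
# Bedrock call is attempted. Region names come from this guide; adapt the
# policy to your own residency and approval requirements.

# Regions this guide treats as the primary validation path for GPT OSS.
VALIDATION_REGIONS = {"us-east-1", "us-east-2", "us-west-2"}

def region_allowed(region: str, residency_mandatory: bool) -> bool:
    """Return True if the pilot may invoke models in `region`.

    When Canadian residency is mandatory, US validation regions are
    rejected; a Canadian region still needs model, endpoint, quota,
    and network-path approval before real data flows through it.
    """
    if residency_mandatory:
        return region.startswith("ca-")
    return region in VALIDATION_REGIONS
```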
Bedrock Mantle vs. Bedrock Runtime: endpoint and model ID guide
The most common implementation error is mixing the Runtime model ID with the Mantle model ID. AWS documents both paths for GPT OSS 120B:
| Path | Best fit | Endpoint | Model ID |
|---|---|---|---|
| Bedrock Mantle | Codex, Responses API, Chat Completions, and OpenAI-compatible clients. | https://bedrock-mantle.{region}.api.aws/v1 | openai.gpt-oss-120b |
| Bedrock Runtime | AWS SDK, InvokeModel, InvokeModelWithResponseStream, and Converse. | https://bedrock-runtime.{region}.amazonaws.com | openai.gpt-oss-120b-1:0 |
| Future Stateful Runtime or Frontier | Production agents that need state, tools, approvals, identity, and governance. | TBD | TBD; verify in your account when access opens. |
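Because mixing the two paths is the most common failure, it can help to encode the table above as a single lookup so scripts never hand-assemble endpoints or model IDs. The values below are copied from the table and should be re-verified against current AWS documentation before use:

```python
# Sketch: map an invocation path to (base endpoint, GPT OSS 120B model ID),
# mirroring the Mantle vs. Runtime table in this guide. Values are taken
# from the table, not discovered from AWS at runtime.

def bedrock_target(path: str, region: str) -> tuple[str, str]:
    """Return the endpoint and model ID for the chosen path."""
    if path == "mantle":   # OpenAI-compatible clients such as Codex
        return (f"https://bedrock-mantle.{region}.api.aws/v1",
                "openai.gpt-oss-120b")
    if path == "runtime":  # AWS SDK: InvokeModel / Converse
        return (f"https://bedrock-runtime.{region}.amazonaws.com",
                "openai.gpt-oss-120b-1:0")
    raise ValueError(f"unknown path: {path}")
```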
IAM, SSO, and least-privilege access
Start with IAM Identity Center. Each person should use temporary credentials refreshed through SSO, not permanent access keys on laptops. Create one administrative permission set for bootstrap and another restricted permission set for Codex/Bedrock users.
```shell
aws configure sso --profile codex-bedrock
aws sso login --profile codex-bedrock
aws sts get-caller-identity --profile codex-bedrock
```

For local validation through Runtime, a restrictive starting point is to allow global model discovery and invocation only for approved models:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "bedrock:ListFoundationModels",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": [
        "arn:aws:bedrock:*::foundation-model/openai.gpt-oss-120b-1:0",
        "arn:aws:bedrock:*::foundation-model/openai.gpt-oss-20b-1:0"
      ]
    }
  ]
}
```

The policy above validates the Bedrock Runtime path only. Codex's amazon-bedrock provider uses the OpenAI-compatible Bedrock Mantle path, so the SSO role also needs Mantle permissions. For a pilot, the AWS managed policy AmazonBedrockMantleInferenceAccess is the fastest starting point. For production, scope Mantle access to specific Project ARNs once your project structure is known.
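Before attaching a policy like the Runtime policy above, a quick lint can catch the one mistake that undermines it: granting invoke actions on every model instead of the approved ARNs. The sketch below walks standard IAM JSON statements; it is a pre-deploy sanity check, not a full IAM evaluator.

```python
# Sketch: fail if any Allow statement grants bedrock:InvokeModel* on
# Resource "*" instead of specific foundation-model ARNs. Statement
# shapes follow standard IAM policy JSON.

def invoke_is_scoped(policy: dict) -> bool:
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        if any(a.startswith("bedrock:InvokeModel") for a in actions):
            resources = stmt.get("Resource", [])
            if isinstance(resources, str):
                resources = [resources]
            if "*" in resources:
                return False  # invoke allowed on every model: too broad
    return True
```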
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "BedrockMantleProjectInference",
      "Effect": "Allow",
      "Action": [
        "bedrock-mantle:Get*",
        "bedrock-mantle:List*",
        "bedrock-mantle:CreateInference"
      ],
      "Resource": "arn:aws:bedrock-mantle:us-east-1:123456789012:project/*"
    }
  ]
}
```

If you use Bedrock API keys or bearer-token authentication instead of AWS credential-based auth, also review bedrock-mantle:CallWithBearerToken.
If your organization uses SCPs, explicitly validate that Bedrock is not blocked in the selected regions. Use explicit denies for regions and models outside policy, but avoid a policy so broad that every future model is automatically usable.
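As one possible starting point for that region guardrail, an SCP shaped like the following denies Bedrock actions outside the approved regions. The Sid, action prefixes, and region list are illustrative assumptions that must match your own policy; note that bedrock-mantle actions are named separately from bedrock and need their own entry.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyBedrockOutsideApprovedRegions",
      "Effect": "Deny",
      "Action": ["bedrock:*", "bedrock-mantle:*"],
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:RequestedRegion": ["us-east-1", "us-east-2", "us-west-2"]
        }
      }
    }
  ]
}
```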
Configure Codex with the built-in amazon-bedrock provider
Use the latest Codex CLI. Version 0.123.0 introduced the built-in amazon-bedrock provider with configurable AWS profile support; version 0.124.0 added first-class Bedrock support for OpenAI-compatible providers, including AWS SigV4 signing and AWS credential-based auth. For this guide's AWS profile/SigV4 path, use 0.124.0 or later unless you are intentionally testing 0.123.0.
```shell
npm install -g @openai/codex@latest
codex --version
```

In ~/.codex/config.toml, use the Mantle model ID, not the Runtime model ID:
```toml
model = "openai.gpt-oss-120b"
model_provider = "amazon-bedrock"

[model_providers.amazon-bedrock.aws]
profile = "codex-bedrock"
```

The provider handles the Mantle endpoint, Responses API, and AWS SigV4 authentication. Keep region, model, and runtime in configuration rather than hard-coding them, because those values may change as AWS adds support for Stateful Runtime, Frontier, or Codex models.
Validate before developer rollout
Do not call the pilot ready until you pass a simple end-to-end check:
```shell
aws bedrock list-foundation-models \
  --profile codex-bedrock \
  --region us-east-1 \
  --query "modelSummaries[?providerName=='OpenAI'].[modelId,modelName,modelLifecycle.status]" \
  --output table
```

- Confirm the model appears in the selected region.
- Confirm the SSO profile expires and refreshes correctly.
- Run a short Codex prompt against a small repository before pointing it at monorepos.
- Check CloudTrail, budgets, and alarms after the test.
- Document Codex version, AWS CLI version, region, endpoint, model, and validation date.
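If you script these checks, the same filter the JMESPath query applies can be done in code against a list-foundation-models response. The sample response below is illustrative data shaped like the API output; in a real run the dict would come from an AWS SDK call with your pilot profile.

```python
# Sketch: select OpenAI models from a list-foundation-models response,
# mirroring the JMESPath query used in this guide. `sample` is fabricated
# illustrative data, not a real API response.

def openai_models(response: dict) -> list[list[str]]:
    """Return [modelId, modelName, lifecycle status] rows for OpenAI models."""
    return [
        [m["modelId"], m["modelName"], m["modelLifecycle"]["status"]]
        for m in response["modelSummaries"]
        if m.get("providerName") == "OpenAI"
    ]

sample = {
    "modelSummaries": [
        {"modelId": "openai.gpt-oss-120b-1:0", "modelName": "GPT OSS 120B",
         "providerName": "OpenAI", "modelLifecycle": {"status": "ACTIVE"}},
        {"modelId": "anthropic.example-model", "modelName": "Other Provider",
         "providerName": "Anthropic", "modelLifecycle": {"status": "ACTIVE"}},
    ]
}
```

An empty result in the target region means the model is not listable there, which is the first thing to check before debugging IAM.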
Cost controls, Projects, and usage attribution
Install cost controls before the first pilot. In AWS Budgets, create a monthly account budget with alerts at 50%, 80%, and 100% actual spend, plus 100% forecasted spend. Send alerts to a FinOps or platform list, not to a single person.
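The alert schedule above can be expressed as data for whatever automation provisions your budgets. The dict shape below is an illustrative sketch, not the AWS Budgets API payload, and the dollar limit is an assumed pilot value.

```python
# Sketch: expand a monthly budget limit into the alert schedule this guide
# recommends: 50%, 80%, and 100% of actual spend, plus 100% forecasted.

def budget_alerts(limit_usd: float) -> list[dict]:
    """Return one alert definition per recommended threshold."""
    alerts = [
        {"threshold_pct": pct, "type": "ACTUAL",
         "trigger_usd": round(limit_usd * pct / 100, 2)}
        for pct in (50, 80, 100)
    ]
    alerts.append({"threshold_pct": 100, "type": "FORECASTED",
                   "trigger_usd": limit_usd})
    return alerts
```

Routing each alert to a shared FinOps list rather than one inbox is what keeps the control useful during vacations and handoffs.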
| Control | When to use it | Note |
|---|---|---|
| Account budget | Every pilot. | Protects against loops and unexpected usage in an isolated account. |
| Amazon Bedrock service-filtered budget | Every Bedrock pilot. | Separates AI consumption from the rest of AWS spend. |
| Cost Anomaly Detection | Teams with recurring usage. | Flags unusual patterns before month-end. |
| Projects API | Applications using OpenAI-compatible APIs on Bedrock Mantle. | AWS positions Projects as application-level isolation with tags, cost tracking, and observability. |
| Separate accounts or tags | Multi-team environments. | Prevents experiments, development, and production from blending together. |
Security, audit, and private networking
Pointing Codex at Bedrock is the easy part. The security review needs to cover who can invoke models, which regions are allowed, what data enters prompts, how logs are retained, how spend is limited, and which network path is used.
- CloudTrail: enable centralized trails and review Bedrock events from day one.
- SCPs: block regions or models outside company policy, with controlled exceptions for the pilot account.
- PrivateLink: AWS documents interface endpoints for `bedrock`, `bedrock-runtime`, and `bedrock-mantle`, including private DNS for `bedrock-mantle.{region}.api.aws`.
- Endpoint policies: apply endpoint policies when traffic must stay restricted to approved actions and resources.
- Sensitive data: start with low-risk repositories and tasks before allowing personal, customer, or production data.
Troubleshooting common setup failures
| Symptom | Likely cause | Fix |
|---|---|---|
| model not found | Runtime ID used on Mantle, or Mantle ID used on Runtime. | Use openai.gpt-oss-120b in Codex/Mantle and openai.gpt-oss-120b-1:0 in Runtime. |
| AccessDeniedException | Permission set, SCP, or endpoint policy blocking Bedrock. | Review IAM, SCPs, region, and model ARN. |
| Codex ignores Bedrock | Old Codex version or wrong model_provider. | Upgrade to Codex CLI 0.124.0 or later for the AWS profile/SigV4 path and confirm model_provider = "amazon-bedrock". |
| SSO works in the terminal but fails in Codex | Different profile or expired session. | Run aws sso login --profile codex-bedrock and confirm the same profile in config.toml. |
| Works in us-east-1, fails elsewhere | Model, endpoint, quota, or PrivateLink path not validated in that region. | Run list-foundation-models in the target region and verify quotas and endpoints. |
| Cost rises with no clear owner | No tags, Projects, isolated account, or service budget. | Add a Bedrock budget, Cost Anomaly Detection, and attribution by account, tag, or Project. |
Readiness checklist
- New AWS account or member account created for the pilot.
- Root MFA, alternate contacts, and IAM Identity Center configured.
- Region decision documented for `us-east-1`, `us-east-2`, `us-west-2`, and Canadian residency requirements.
- OpenAI models listed with `list-foundation-models`.
- Correct Mantle and Runtime model IDs documented.
- Codex CLI `0.124.0` or later installed for the AWS profile/SigV4 path.
- Local SSO profile working with temporary credentials.
- Account budget, Bedrock budget, and shared alerts enabled.
- CloudTrail, SCPs, PrivateLink, and endpoint policies reviewed for pilot risk.
- Future switch plan documented for GPT-5.5, GPT-5.5 Pro, Codex, Frontier, or Stateful Runtime when AWS opens access.
FAQ
Does the April 27 Microsoft/OpenAI agreement mean OpenAI models are now broadly available on AWS?
It means the AWS path is stronger. OpenAI and Microsoft now say OpenAI can serve all of its products across any cloud provider, while Microsoft remains OpenAI's primary cloud partner and products ship first on Azure unless Microsoft cannot or chooses not to support the necessary capabilities. For AWS customers, that reduces commercial ambiguity around the Amazon/OpenAI roadmap. It does not replace the need to verify which OpenAI models, runtimes, regions, quotas, and endpoints are available in your own AWS account.
Is GPT-5.5, GPT-5.5 Pro, or GPT-5.3-Codex available as a public Bedrock model today?
Do not assume so. OpenAI released GPT-5.5 and OpenAI API docs list gpt-5.5, but that is not the same as Bedrock availability. Treat GPT-5.5, GPT-5.5 Pro, GPT-5.3-Codex, OpenAI Frontier, and Stateful Runtime as separate availability questions. Validate what your account can list and invoke today before promising an architecture.
Is Codex itself running inside Bedrock?
In the current path, Codex uses the amazon-bedrock provider to call an OpenAI model available through Bedrock Mantle. That is not the same as saying the future Codex/Frontier runtime is already provisionable as a Bedrock resource.
How should US and Canadian teams choose a region?
Start with us-east-1, us-east-2, or us-west-2 when the goal is to validate OpenAI GPT OSS and the current Codex provider quickly. For Canadian companies, document whether calls to US regions are allowed; if Canadian residency is mandatory, keep sensitive data out of the pilot until the required model, endpoint, quota, and network path are approved in the required Canadian region.
Do we need PrivateLink?
Not every pilot needs it, but regulated environments should evaluate PrivateLink and endpoint policies early. AWS documents endpoints for bedrock-mantle, bedrock-runtime, and Bedrock control-plane actions.
Do Projects replace separate AWS accounts?
Not completely. Projects help isolate applications inside one account when using OpenAI-compatible APIs on Mantle. AWS accounts remain stronger boundaries for billing, ownership, and governance.
How Elevata helps
This work takes more than a config.toml file. Elevata helps teams validate AWS readiness for Codex and OpenAI agents: account and OU design, IAM/SCP, Bedrock, Mantle, PrivateLink, CloudTrail, budgets, Projects, observability, and developer rollout.
If your team wants to prepare AWS before agent usage spreads, review your AWS readiness with Elevata.