# terraform-aws-bedrock-agent

Terraform module for Amazon Bedrock Agent resources


## Providers

| Name | Version |
|------------|---------|
| aws | ~> 6.0 |
| opensearch | ~> 2.3 |
| time | ~> 0.13 |

## Requirements

| Name | Version |
|------------|---------|
| terraform | ~> 1.3 |
| aws | ~> 6.0 |
| opensearch | ~> 2.3 |
| time | ~> 0.13 |
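This module pulls in the community `opensearch` provider in addition to `aws`, so both must be declared and configured in the calling configuration. A minimal sketch (the endpoint URL is a placeholder; attribute names follow the opensearch-project/opensearch provider):

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 6.0"
    }
    opensearch = {
      source  = "opensearch-project/opensearch"
      version = "~> 2.3"
    }
  }
}

# Point the provider at the OpenSearch Serverless collection endpoint.
provider "opensearch" {
  url         = "https://example.eu-central-1.aoss.amazonaws.com" # placeholder
  healthcheck = false # serverless collections do not serve the default health endpoint
}
```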

## Required Inputs

The following input variables are required:

Description: Name for the agent.

Type: string

Description: Name for the agent alias.

Type: string

Description: Model identifier for agent.

Type: string

Description: Name for the knowledgebase.

Type: string

Description: Description for the knowledgebase.

Type: string

Description: ARN of the S3 bucket containing the source data.

Type:

```hcl
object({
  bucket_arn              = string
  bucket_owner_account_id = optional(string)
  inclusion_prefixes      = optional(set(string))
})
```

Description: Name of OpenSearch Serverless Collection.

Type: string
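Taken together, the required inputs can be wired up as below. The input names here are illustrative guesses, since this listing omits the variable names; check the module's `variables.tf` for the authoritative ones.

```hcl
module "bedrock_agent" {
  source = "Flaconi/bedrock-agent/aws"

  # Input names below are hypothetical -- verify against variables.tf.
  agent_name                = "support-agent"
  agent_alias_name          = "live"
  agent_model_id            = "anthropic.claude-v2"
  knowledgebase_name        = "support-kb"
  knowledgebase_description = "Product documentation for the support agent"
  oss_collection_name       = "support-kb-collection"

  knowledgebase_data_source = {
    bucket_arn         = "arn:aws:s3:::my-docs-bucket"
    inclusion_prefixes = ["docs/"]
  }
}
```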

## Optional Inputs

The following input variables are optional (have default values):

Description: Description for the agent alias.

Type: string

Default: null

Description: Model identifier for agent.

Type: string

Default: "anthropic.claude-v2"

Description: (Deprecated) Model identifier for the knowledgebase. Use `knowledgebase_embedding_model_id` instead.

Type: string

Default: null

Description: Embedding Model identifier for Knowledgebase.

Type: string

Default: "amazon.titan-embed-text-v1"

Description: Additional model identifiers the knowledgebase may access. These permissions are granted in addition to the embedding model.

Type: list(string)

Default: []

Description: Data deletion policy for a data source. Valid values: RETAIN, DELETE

Type: string

Default: "RETAIN"

Description: Chunking and optional custom transformation configuration for the knowledgebase data source.

Type:

```hcl
object({
  chunking_configuration = object({
    chunking_strategy = string
    fixed_size_chunking_configuration = optional(object({
      max_tokens         = number
      overlap_percentage = optional(number)
    }))
    hierarchical_chunking_configuration = optional(object({
      overlap_tokens = number
      level_1        = object({ max_tokens = number })
      level_2        = object({ max_tokens = number })
    }))
    semantic_chunking_configuration = optional(object({
      breakpoint_percentile_threshold = number
      buffer_size                     = number
      max_token                       = number
    }))
  })
  custom_transformation_configuration = optional(object({
    intermediate_storage    = string
    transformation_function = string
  }))
})
```

Default:

```json
{
  "chunking_configuration": {
    "chunking_strategy": "FIXED_SIZE",
    "fixed_size_chunking_configuration": {
      "max_tokens": 300,
      "overlap_percentage": 20
    },
    "hierarchical_chunking_configuration": null,
    "semantic_chunking_configuration": null
  }
}
```
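To override the default fixed-size strategy, pass an object matching the type above; for example, hierarchical chunking (the variable name is hypothetical, the nested attribute names come from the type definition):

```hcl
# Hypothetical variable name; shape follows the type above.
vector_ingestion_configuration = {
  chunking_configuration = {
    chunking_strategy = "HIERARCHICAL"
    hierarchical_chunking_configuration = {
      overlap_tokens = 60
      level_1        = { max_tokens = 1500 }
      level_2        = { max_tokens = 300 }
    }
  }
}
```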

Description: Additional ARNs of roles allowed to access OpenSearch.

Type: list(string)

Default: []

Description: Prompt template for knowledge base response generation.

Type: string

Default: " You are a helpful assistant. Answer the following question using the context provided:\n Question: {question}\n Context: {context}\n Your response should be thoughtful, detailed, and relevant to the provided context.\n"

Description: Parser mode for knowledge base response generation.

Type: string

Default: "DEFAULT"

Description: Prompt creation mode for knowledge base response generation.

Type: string

Default: "OVERRIDDEN"

Description: Prompt state for knowledge base response generation.

Type: string

Default: "ENABLED"

Description: Maximum number of tokens to allow in the generated response.

Type: number

Default: 512

Description: List of stop sequences that will stop generation.

Type: list(string)

Default:

[
  "END"
]

Description: Likelihood of the model selecting higher-probability options while generating a response.

Type: number

Default: 0.7

Description: Number of top most-likely candidates from which the model chooses the next token.

Type: number

Default: 50

Description: Top percentage of the probability distribution of next tokens, from which the model chooses the next token.

Type: number

Default: 0.9

Description: Prompt template for pre-processing.

Type: string

Default: " You are preparing the input. Extract relevant context and pre-process the following question:\n Question: {question}\n Context: {context}\n Pre-processing should focus on extracting the core information.\n"

Description: Parser mode for pre-processing.

Type: string

Default: "DEFAULT"

Description: Prompt creation mode for pre-processing.

Type: string

Default: "OVERRIDDEN"

Description: Prompt state for pre-processing.

Type: string

Default: "ENABLED"

Description: Maximum number of tokens to allow in the generated response.

Type: number

Default: 512

Description: List of stop sequences that will stop generation.

Type: list(string)

Default:

[
  "END"
]

Description: Likelihood of the model selecting higher-probability options while generating a response.

Type: number

Default: 0.7

Description: Number of top most-likely candidates from which the model chooses the next token.

Type: number

Default: 50

Description: Top percentage of the probability distribution of next tokens, from which the model chooses the next token.

Type: number

Default: 0.9

Description: Prompt template for orchestration.

Type: string

Default: " You are orchestrating the flow of the agent. Based on the question and context, determine the next steps in the process:\n Question: {question}\n Context: {context}\n Plan the next steps to follow the best strategy.\n"

Description: Parser mode for orchestration.

Type: string

Default: "DEFAULT"

Description: Prompt creation mode for orchestration.

Type: string

Default: "OVERRIDDEN"

Description: Prompt state for orchestration.

Type: string

Default: "ENABLED"

Description: Maximum number of tokens to allow in the generated response.

Type: number

Default: 512

Description: List of stop sequences that will stop generation.

Type: list(string)

Default:

[
  "END"
]

Description: Likelihood of the model selecting higher-probability options while generating a response.

Type: number

Default: 0.7

Description: Number of top most-likely candidates from which the model chooses the next token.

Type: number

Default: 50

Description: Top percentage of the probability distribution of next tokens, from which the model chooses the next token.

Type: number

Default: 0.9

Description: Prompt template for post-processing.

Type: string

Default: "You are performing post-processing. Review the agent's output and refine the response for clarity and relevance:\nResponse: {response}\nContext: {context}\nEnsure the output is polished and aligns with the context.\n"

Description: Parser mode for post-processing.

Type: string

Default: "DEFAULT"

Description: Prompt creation mode for post-processing.

Type: string

Default: "OVERRIDDEN"

Description: Prompt state for post-processing.

Type: string

Default: "DISABLED"

Description: Maximum number of tokens to allow in the generated response.

Type: number

Default: 512

Description: List of stop sequences that will stop generation.

Type: list(string)

Default:

[
  "END"
]

Description: Likelihood of the model selecting higher-probability options while generating a response.

Type: number

Default: 0.7

Description: Number of top most-likely candidates from which the model chooses the next token.

Type: number

Default: 50

Description: Top percentage of the probability distribution of next tokens, from which the model chooses the next token.

Type: number

Default: 0.9

Description: Optional ID of an existing Guardrail to use.

Type: string

Default: null

Description: Optional version of the existing Guardrail to use.

Type: string

Default: null

Description: Optional full Guardrail configuration. If set, the module creates a Guardrail and version.

Type:

```hcl
object({
  description               = optional(string)
  blocked_input_messaging   = optional(string)
  blocked_outputs_messaging = optional(string)

  content_policy_config = optional(object({
    filters_config = list(object({
      type            = string
      input_strength  = string
      output_strength = string
    }))
  }))

  sensitive_information_policy_config = optional(object({
    pii_entities_config = optional(list(object({
      type   = string
      action = string
    })))
    regexes_config = optional(list(object({
      name        = string
      description = string
      pattern     = string
      action      = string
    })))
  }))

  topic_policy_config = optional(object({
    topics_config = list(object({
      name       = string
      examples   = list(string)
      type       = string
      definition = string
    }))
  }))

  word_policy_config = optional(object({
    managed_word_lists_config = optional(list(object({
      type = string
    })))
    words_config = optional(list(object({
      text = string
    })))
  }))
})
```

Default: null
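A populated Guardrail configuration might look like the following (the variable name is hypothetical; the filter types and action values shown are standard Bedrock Guardrails enums):

```hcl
# Hypothetical variable name; shape follows the type above.
guardrail_config = {
  description               = "Baseline guardrail for the support agent"
  blocked_input_messaging   = "Sorry, I can't process that request."
  blocked_outputs_messaging = "Sorry, I can't return that response."

  content_policy_config = {
    filters_config = [
      { type = "HATE", input_strength = "HIGH", output_strength = "HIGH" },
      { type = "VIOLENCE", input_strength = "MEDIUM", output_strength = "MEDIUM" },
    ]
  }

  sensitive_information_policy_config = {
    pii_entities_config = [
      { type = "EMAIL", action = "ANONYMIZE" },
    ]
  }

  word_policy_config = {
    managed_word_lists_config = [{ type = "PROFANITY" }]
  }
}
```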

Description: A map of tags to assign to the created resources.

Type: map(string)

Default: {}

## Outputs

| Name | Description |
|------|-------------|
| agent | Information about the created Bedrock Agent |
| agent_alias | Information about the created Bedrock Agent Alias |
| guardrail_id | ID of the created Guardrail |
| guardrail_version | Version of the created Guardrail |
| knowledge_base | Information about the created Bedrock Knowledgebase |
| oss_collection | Information about the created OpenSearch Serverless collection |
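The outputs are objects, so downstream configuration can reference their attributes; for instance (the attribute path is illustrative, inspect the output to confirm):

```hcl
output "bedrock_agent_id" {
  # `agent` is the module output; its attribute names mirror the
  # aws_bedrockagent_agent resource (illustrative path).
  value = module.bedrock_agent.agent.agent_id
}
```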

## License

MIT License

Copyright (c) 2024 Flaconi GmbH
