Learn how to select and configure language models for your AI agents using the Hamsa web interface.

What You’ll Learn

  • Selecting the right LLM provider and model
  • Adjusting temperature settings
  • Setting max tokens for responses
  • Configuring advanced parameters (Top P, penalties)
  • Testing different configurations
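Before diving into the settings, it helps to see what temperature actually does under the hood. The sketch below is a generic illustration of temperature-scaled sampling (the mechanism most LLM providers use); it is not Hamsa-specific code, and the logit values are made up for demonstration.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw model scores (logits) into sampling probabilities.
    Lower temperature sharpens the distribution toward the top token
    (more deterministic); higher temperature flattens it (more varied)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for three candidate tokens
logits = [2.0, 1.0, 0.5]

low_temp = softmax_with_temperature(logits, 0.2)   # near-deterministic
high_temp = softmax_with_temperature(logits, 1.5)  # more evenly spread
```

At temperature 0.2 the top token dominates almost completely, while at 1.5 the probabilities move closer together, which is why low temperatures suit factual agents and higher ones suit creative ones.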

Getting Started

Navigate to your agent’s configuration and find the LLM Settings or Model Configuration section.
For programmatic model configuration and dynamic selection, see the API Integration Guide.
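Of the advanced parameters listed above, Top P (nucleus sampling) is the least intuitive. The sketch below shows the generic mechanism: keep only the smallest set of tokens whose cumulative probability reaches the Top P threshold, then renormalize. This is an illustration of the standard technique, not Hamsa's implementation, and the probability values are invented for the example.

```python
def top_p_filter(probs, top_p):
    """Nucleus sampling: keep the smallest set of tokens whose cumulative
    probability reaches top_p, zero out the rest, and renormalize."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cumulative = set(), 0.0
    for i in order:
        kept.add(i)
        cumulative += probs[i]
        if cumulative >= top_p:
            break
    filtered = [p if i in kept else 0.0 for i, p in enumerate(probs)]
    total = sum(filtered)
    return [p / total for p in filtered]

# Hypothetical token probabilities, already sorted for readability
probs = [0.5, 0.3, 0.15, 0.05]
result = top_p_filter(probs, 0.8)  # keeps only the first two tokens
```

With Top P set to 0.8, the two most likely tokens (0.5 + 0.3) cover the threshold, so the long tail is cut off entirely. Lowering Top P therefore restricts the model to safer, more predictable word choices, much like lowering temperature but by truncation rather than reweighting.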

Next Steps

  • Choose the optimal model for your use case
  • Test different temperature settings
  • Monitor token usage and costs
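For the last step, monitoring token usage and costs, a simple back-of-the-envelope model is often enough. The sketch below is a generic cost estimator; the per-1,000-token rates are hypothetical placeholders, so substitute your provider's actual pricing.

```python
def estimate_cost(input_tokens, output_tokens, in_rate, out_rate):
    """Estimate a request's cost in USD.
    in_rate / out_rate are USD per 1,000 tokens (hypothetical values
    here; check your provider's pricing page for real numbers)."""
    return (input_tokens / 1000) * in_rate + (output_tokens / 1000) * out_rate

# A request with 1,200 prompt tokens and 400 completion tokens
cost = estimate_cost(1200, 400, in_rate=0.0005, out_rate=0.0015)
```

Because output tokens are typically billed at a higher rate than input tokens, capping max tokens on responses is usually the most direct lever for controlling spend.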