This guide provides more details about creating and setting AI API keys in your .env file for using Xyne’s chat feature.

To use AI chat, you need to configure access credentials. You can use one of the following providers: AWS, OpenAI, Ollama, Together AI or Fireworks AI.

AWS (Amazon Bedrock)

  • Sign in to the AWS Management Console.
  • Generate your Access Key and Secret Key.
  • Add them as environment variables in your .env.default file:
    .env.default
    AWS_ACCESS_KEY=<YOUR_AWS_ACCESS_KEY> 
    AWS_SECRET_KEY=<YOUR_AWS_SECRET_KEY>  
    AWS_REGION=<YOUR_AWS_REGION>     
    
    The AWS_REGION key is optional in the .env.default file; if not provided, it defaults to us-west-2.
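
    The optional region can be sketched as a simple fallback. This is a minimal illustration of the documented default, not Xyne's actual configuration code; the helper name and dict shape are assumptions.

    ```python
    def aws_bedrock_config(env: dict) -> dict:
        """Collect Bedrock credentials from environment-style settings.

        Hypothetical helper: `env` stands in for the loaded .env values.
        """
        return {
            "access_key": env["AWS_ACCESS_KEY"],
            "secret_key": env["AWS_SECRET_KEY"],
            # AWS_REGION is optional and falls back to us-west-2, per this guide
            "region": env.get("AWS_REGION", "us-west-2"),
        }
    ```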

OpenAI

  • Create an API key from the OpenAI API settings.
  • Add it as an environment variable in your .env.default file:
    .env.default
    OPENAI_API_KEY=<YOUR_OPENAI_API_KEY> 
    

Ollama (Local LLM)

  • Install Ollama and download the desired model.
  • Add the model name in your .env.default file:
    .env.default
    OLLAMA_MODEL=<YOUR_OLLAMA_MODEL_NAME> 
    
If you’re using Ollama, ensure the specified model is downloaded locally.
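
A startup check for this requirement can be sketched as below. The helper is hypothetical (not part of Xyne); it only fails fast when the variable is missing, leaving the actual download to `ollama pull`.

```python
def get_ollama_model(env: dict) -> str:
    """Return the configured Ollama model name, failing fast if unset.

    Hypothetical helper: `env` stands in for the loaded .env values.
    """
    model = env.get("OLLAMA_MODEL", "").strip()
    if not model:
        raise RuntimeError(
            "OLLAMA_MODEL is not set; download a model with `ollama pull` "
            "and add its name to your .env.default file"
        )
    return model
```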

Together AI

  • Create your API key in Together AI.
  • Select the model of your choice.
  • Add the following to your .env.default file:
    .env.default
    TOGETHER_API_KEY=<YOUR_TOGETHER_API_KEY>
    TOGETHER_MODEL=<YOUR_TOGETHER_MODEL>
    TOGETHER_FAST_MODEL=<YOUR_TOGETHER_FAST_MODEL>
    

Fireworks AI

  • Create your API key in Fireworks AI.
  • Select the model of your choice.
  • Add the following to your .env.default file:
    .env.default
    FIREWORKS_API_KEY=<YOUR_FIREWORKS_API_KEY>
    FIREWORKS_MODEL=<YOUR_FIREWORKS_MODEL>
    FIREWORKS_FAST_MODEL=<YOUR_FIREWORKS_FAST_MODEL>
    

Setting up for Reasoning [Together | Fireworks | Ollama]:

To enable reasoning for your model, add the REASONING flag to your .env.default file:

.env.default
REASONING=true

For example:

Reasoning with Together AI:

.env.default
TOGETHER_API_KEY=<YOUR_TOGETHER_API_KEY>
TOGETHER_MODEL=deepseek-ai/DeepSeek-R1
TOGETHER_FAST_MODEL=deepseek-ai/DeepSeek-V3
REASONING=true
FAST_MODEL_REASONING=false

Reasoning with Fireworks AI:

.env.default
FIREWORKS_API_KEY=<YOUR_FIREWORKS_API_KEY>
FIREWORKS_MODEL=accounts/fireworks/models/deepseek-r1
FIREWORKS_FAST_MODEL=accounts/fireworks/models/deepseek-v3
REASONING=true
FAST_MODEL_REASONING=false

Reasoning with Ollama:

.env.default
OLLAMA_MODEL=<REASONING_MODEL>
OLLAMA_FAST_MODEL=<NON_REASONING_MODEL>
REASONING=true
FAST_MODEL_REASONING=false
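
The REASONING and FAST_MODEL_REASONING values above are the strings "true" or "false" in the .env file. A minimal sketch of how such flags might be interpreted (the helper and the accepted truthy values are assumptions, not Xyne's actual parser):

```python
def env_flag(env: dict, name: str, default: bool = False) -> bool:
    """Interpret a REASONING-style flag from environment-style settings.

    Hypothetical helper: treats "true" (case-insensitive) and "1" as on;
    anything else, or a missing variable, falls back to `default`.
    """
    raw = env.get(name)
    if raw is None:
        return default
    return raw.strip().lower() in {"true", "1"}
```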

Key Selection Order:

If multiple keys are set in the environment variables, our system follows this precedence:

AWS → OpenAI → Ollama → Together AI → Fireworks AI
Only one provider is required to get started. This means any one of AWS, OpenAI, Ollama, Together AI, or Fireworks AI can be used; all of them aren’t required.
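
The precedence above can be sketched as a first-match lookup. This is an illustration of the documented order only; the variable groupings and helper name are assumptions, not Xyne's actual selection code.

```python
# Documented precedence: AWS → OpenAI → Ollama → Together AI → Fireworks AI.
# The required-variable lists here are assumptions based on this guide.
PROVIDER_ENV_VARS = [
    ("AWS", ["AWS_ACCESS_KEY", "AWS_SECRET_KEY"]),
    ("OpenAI", ["OPENAI_API_KEY"]),
    ("Ollama", ["OLLAMA_MODEL"]),
    ("Together AI", ["TOGETHER_API_KEY"]),
    ("Fireworks AI", ["FIREWORKS_API_KEY"]),
]

def select_provider(env: dict):
    """Return the first provider whose required variables are all set."""
    for name, keys in PROVIDER_ENV_VARS:
        if all(env.get(k) for k in keys):
            return name
    return None
```

For example, with both OPENAI_API_KEY and TOGETHER_API_KEY set, OpenAI wins because it appears earlier in the order.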