This guide provides more details about creating and setting AI API keys in your .env file for using Xyne’s chat feature.

To use AI chat, you need to configure access credentials. You can use one of the following providers: AWS, OpenAI, Ollama, Gemini, Together AI or Fireworks AI.

AWS (Amazon Bedrock)

  • Sign in to the AWS Management Console
  • Generate your Access Key and Secret Key.
  • Add them as environment variables in your .env.default file:
    .env.default
    AWS_ACCESS_KEY=<YOUR_AWS_ACCESS_KEY> 
    AWS_SECRET_KEY=<YOUR_AWS_SECRET_KEY>  
    AWS_REGION=<YOUR_AWS_REGION>     
    
    The AWS_REGION key is optional in the .env.default file; if not provided, it defaults to us-west-2.
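To illustrate the default-region behavior, here is a minimal Python sketch; the `load_env` helper and the sample values are hypothetical, for demonstration only, and are not part of Xyne:

```python
# Minimal illustration of the AWS_REGION fallback described above.
# load_env and the sample content are hypothetical, for demonstration only.

def load_env(text: str) -> dict:
    """Parse simple KEY=VALUE lines from .env-style text."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip()
    return env

sample = """
AWS_ACCESS_KEY=AKIA_EXAMPLE
AWS_SECRET_KEY=secret_example
"""

env = load_env(sample)
# AWS_REGION is absent from the sample, so the default applies.
region = env.get("AWS_REGION", "us-west-2")
print(region)  # us-west-2
```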

OpenAI

  • Create an API key from the OpenAI API settings.
  • Add it as an environment variable in your .env.default file:
    .env.default
    OPENAI_API_KEY=<YOUR_OPENAI_API_KEY> 
    

Ollama (Local LLM)

  • Install Ollama and download the desired model.
  • Add the model name in your .env.default file:
    .env.default
    OLLAMA_MODEL=<YOUR_OLLAMA_MODEL_NAME> 
    
If you’re using Ollama, ensure the specified model is downloaded locally.
Also ensure the downloaded Ollama model works on the host machine: run ollama run your_model_name your_message, which should return output confirming Ollama is working.
For quickstart mode, ensure Ollama is listening on all addresses so that Docker is able to reach it.

Google AI

  • Generate your API key from Google AI Studio
  • Add them as environment variables in your .env.default file:
    .env.default
    GEMINI_API_KEY=<YOUR_GEMINI_API_KEY> 
    GEMINI_MODEL=<YOUR_GEMINI_MODEL_NAME>  
    

Together AI

  • Create your API key in Together AI
  • Select the model of your choice.
  • Add the following to your .env.default file:
    .env.default
    TOGETHER_API_KEY=<YOUR_TOGETHER_API_KEY>
    TOGETHER_MODEL=<YOUR_TOGETHER_MODEL>
    TOGETHER_FAST_MODEL=<YOUR_TOGETHER_FAST_MODEL>
    

Fireworks AI

  • Create your API key in Fireworks AI
  • Select the model of your choice.
  • Add the following to your .env.default file:
    .env.default
    FIREWORKS_API_KEY=<YOUR_FIREWORKS_API_KEY>
    FIREWORKS_MODEL=<YOUR_FIREWORKS_MODEL>
    FIREWORKS_FAST_MODEL=<YOUR_FIREWORKS_FAST_MODEL>
    

Setting up for Reasoning [Together | Fireworks | Ollama]:

To enable reasoning for your model, add REASONING to your .env.default file:

.env.default
REASONING=true

For example:

Reasoning with Together AI:

.env.default
TOGETHER_API_KEY=<YOUR_TOGETHER_API_KEY>
TOGETHER_MODEL=deepseek-ai/DeepSeek-R1
TOGETHER_FAST_MODEL=deepseek-ai/DeepSeek-V3
REASONING=true
FAST_MODEL_REASONING=false

Reasoning with Fireworks AI:

.env.default
FIREWORKS_API_KEY=<YOUR_FIREWORKS_API_KEY>
FIREWORKS_MODEL=accounts/fireworks/models/deepseek-r1
FIREWORKS_FAST_MODEL=accounts/fireworks/models/deepseek-v3
REASONING=true
FAST_MODEL_REASONING=false

Reasoning with Ollama:

.env.default
OLLAMA_MODEL=<REASONING_MODEL>
OLLAMA_FAST_MODEL=<NON_REASONING_MODEL>
REASONING=true
FAST_MODEL_REASONING=false
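As an illustration of how these flags could gate model choice, here is a hypothetical Python sketch; the `pick_model` helper and the sample model names are for demonstration only and may differ from Xyne’s internal logic:

```python
# Hypothetical sketch of how REASONING / FAST_MODEL_REASONING could
# select between the main and fast models. Illustrative only.

def pick_model(env: dict, fast: bool) -> tuple:
    """Return (model_name, reasoning_enabled) for the main or fast model."""
    if fast:
        model = env["OLLAMA_FAST_MODEL"]
        reasoning = env.get("FAST_MODEL_REASONING", "false") == "true"
    else:
        model = env["OLLAMA_MODEL"]
        reasoning = env.get("REASONING", "false") == "true"
    return model, reasoning

env = {
    "OLLAMA_MODEL": "deepseek-r1",       # sample reasoning model
    "OLLAMA_FAST_MODEL": "llama3",       # sample non-reasoning model
    "REASONING": "true",
    "FAST_MODEL_REASONING": "false",
}
print(pick_model(env, fast=False))  # ('deepseek-r1', True)
print(pick_model(env, fast=True))   # ('llama3', False)
```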

Additional Base URL Support

We also support setting a base URL value for OpenAI and Together AI. Along with your OpenAI or Together API key, you can set the base URL value:

OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>  # If you want to use OpenAI
    ### OR
TOGETHER_API_KEY=<YOUR_TOGETHER_API_KEY>  # If you want to use Together AI
BASE_URL=<YOUR_BASE_URL>
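The override behavior can be sketched as follows; the `resolve_base_url` helper and the default URL constant are assumptions for illustration, not Xyne’s actual code:

```python
# Hypothetical sketch: applying an optional BASE_URL override.
# The default URL here is an assumption for illustration.
DEFAULT_OPENAI_URL = "https://api.openai.com/v1"

def resolve_base_url(env: dict) -> str:
    """Use BASE_URL when set; otherwise fall back to the provider default."""
    return env.get("BASE_URL") or DEFAULT_OPENAI_URL

print(resolve_base_url({}))                                   # default URL
print(resolve_base_url({"BASE_URL": "https://proxy.local"}))  # override
```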

Key Selection Order:

If multiple keys are set in the environment variables, our system follows this precedence:

AWS → OpenAI → Ollama → Together AI → Fireworks AI → Gemini

Only a single provider is required to get started: any one of AWS, OpenAI, Ollama, Gemini, Together AI, or Fireworks AI can be used; all of them aren’t required.
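The precedence above can be sketched as a simple first-match lookup; the `select_provider` helper and the key-variable pairing are hypothetical, for illustration only:

```python
# Illustration of the provider precedence described above.
# select_provider is a hypothetical helper, not Xyne's actual code.

PRECEDENCE = [
    ("AWS", "AWS_ACCESS_KEY"),
    ("OpenAI", "OPENAI_API_KEY"),
    ("Ollama", "OLLAMA_MODEL"),
    ("Together AI", "TOGETHER_API_KEY"),
    ("Fireworks AI", "FIREWORKS_API_KEY"),
    ("Gemini", "GEMINI_API_KEY"),
]

def select_provider(env: dict):
    """Return the first provider whose variable is set, or None."""
    for provider, var in PRECEDENCE:
        if env.get(var):
            return provider
    return None

# If both OpenAI and Gemini keys are set, OpenAI wins:
print(select_provider({"GEMINI_API_KEY": "g", "OPENAI_API_KEY": "o"}))  # OpenAI
```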