LLMs
LLMs (Large Language Models) are the brains behind the Agents in AgentForge. These models are responsible for reasoning, invoking tools, and generating content. Currently, AgentForge supports two LLM platforms out of the box: OpenAI and Groq. In addition, you can easily integrate any of the LangChain-supported LLMs as described here.
Supported Platforms
OpenAI
- API Key: OPENAI_API_KEY
- Default Model: gpt-4o
Groq
- API Key: GROQ_API_KEY
- Default Model: llama3-70b-8192
Configuration
When setting up your Agents, you can specify the API key for one or both of these platforms. Here’s how the configuration works:
- Single Platform: If you specify an API key for either OpenAI or Groq, the respective platform will be initialized and used for all Agent processing.
- Dual Platforms: If you provide API keys for both OpenAI and Groq, OpenAI takes precedence and is used by default. You can also mix the two platforms and initialize a custom LLM for a specific agent. For example, update the agent initialization code to create an LLM instance explicitly (a Groq-backed variant is sketched after this snippet):
const llm = new ChatOpenAI({
  model: process.env.OPENAI_MODEL_NAME || 'gpt-4o',
  temperature: 0,
});

const name = 'Presenter';
export const createPresenter = async () => {
  const presenterAgent = await createAgent(
    llm,
...
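The same pattern lets you mix platforms: give one agent a Groq-backed model while the rest keep the default. The sketch below assumes the @langchain/groq package and the same createAgent helper shown above; the agent name and the omitted arguments are illustrative.

import { ChatGroq } from '@langchain/groq';

// Groq-backed chat model for one specific agent; reads GROQ_API_KEY from the environment.
const groqLlm = new ChatGroq({
  model: process.env.GROQ_MODEL_NAME || 'llama3-70b-8192',
  temperature: 0,
});

export const createResearcher = async () => {
  // Pass the Groq model in place of the default OpenAI-backed llm.
  const researcherAgent = await createAgent(
    groqLlm,
    // ...remaining createAgent arguments, as in the Presenter example above
  );
  return researcherAgent;
};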
Example Configuration
# Use OpenAI
OPENAI_API_KEY=your_openai_api_key

# Use Groq
GROQ_API_KEY=your_groq_api_key
Model Selection
You can also configure which model to use on each platform by specifying the GROQ_MODEL_NAME or OPENAI_MODEL_NAME environment variables.
- Groq Model: Set GROQ_MODEL_NAME to your desired model name. The default model is llama3-70b-8192.
- OpenAI Model: Set OPENAI_MODEL_NAME to your desired model name. The default model is gpt-4o.
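For example, a configuration that overrides both defaults might look like this (the model names shown are only illustrative; use any model available on the respective platform):

# Optional model overrides (example values)
OPENAI_MODEL_NAME=gpt-4o-mini
GROQ_MODEL_NAME=llama3-8b-8192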
Conclusion
LLMs are central to the functionality of Agents within AgentForge, enabling advanced reasoning, tool invocation, and content generation. By supporting both OpenAI and Groq platforms, AgentForge offers flexibility and power, allowing you to choose the best model and platform for your needs. Configure your API keys and model names to get started with your preferred setup, and leverage the full potential of LLMs in your AI applications.