> [!TIP]
> ⭐️ Star this repo to get release notifications for new features.
```bash
npx palico init <project-name>
```
Check out our quickstart guide.
https://github.com/palico-ai/palico-main/assets/32821894/54f35583-41c1-48a3-9565-95c484a4909b
With Palico, you have complete control over the implementation details of your LLM application. Building an LLM application with Palico simply involves implementing the `Agent` interface. Here's an example:
```typescript
import {
  Agent,
  AgentResponse,
  ConversationContext,
  ConversationRequestContent,
} from "@palico-ai/app";

class MyLLMApp implements Agent {
  async chat(
    content: ConversationRequestContent,
    context: ConversationContext
  ): Promise<AgentResponse> {
    // Your LLM application logic
    // 1. Pre-processing
    // 2. Build your prompt
    // 3. Call your LLM model
    // 4. Post-processing
    return {
      // 5. Return a response to the caller
    };
  }
}
```
Learn more about building your application with Palico (docs).
Since you own the implementation details, you can use Palico with most other external tools and libraries (see the sketch after the table below).
| Tools or Libraries | Supported |
| ------------------ | --------- |
| Langchain          | ✅        |
| LlamaIndex         | ✅        |
| Portkey            | ✅        |
| OpenAI             | ✅        |
| Anthropic          | ✅        |
| Cohere             | ✅        |
| Azure              | ✅        |
| AWS Bedrock        | ✅        |
| GCP Vertex         | ✅        |
| Pinecone           | ✅        |
| PG Vector          | ✅        |
| Chroma             | ✅        |
Learn more from docs.
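For example, because the `chat` method is plain TypeScript, you can call any provider SDK directly inside it. Below is a minimal sketch using the OpenAI Node SDK. The `userMessage` and `message` field names are assumptions about the Palico request/response types, so check the types in your generated project; the snippet also assumes `OPENAI_API_KEY` is set in your environment.

```typescript
import OpenAI from "openai";
import {
  Agent,
  AgentResponse,
  ConversationContext,
  ConversationRequestContent,
} from "@palico-ai/app";

class OpenAIChatApp implements Agent {
  // Reads OPENAI_API_KEY from the environment by default
  private client = new OpenAI();

  async chat(
    content: ConversationRequestContent,
    context: ConversationContext
  ): Promise<AgentResponse> {
    // `userMessage` is assumed here — verify against the
    // ConversationRequestContent type in your project
    const completion = await this.client.chat.completions.create({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: content.userMessage ?? "" }],
    });
    return {
      // `message` is likewise assumed — verify against AgentResponse
      message: completion.choices[0].message.content ?? "",
    };
  }
}
```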
Make a code change and instantly preview it locally on our playground UI.
https://github.com/user-attachments/assets/c33ae53d-acf5-4c89-9c41-743ea1cb4722
Working on an LLM application involves testing different variations of models, prompts, and application logic. Palico helps you build an interchangeable application layer with a feature-flag-like mechanism called AppConfig. With AppConfig, you can easily swap models, prompts, or any other logic in your application layer.
Learn more about AppConfig.
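As a rough sketch of the idea, your `chat` method can read a value from the conversation context's app config and branch on it. The `"model"` key, the default value, and the exact shape of `appConfig` below are illustrative assumptions, not part of Palico itself:

```typescript
import { ConversationContext } from "@palico-ai/app";

// Sketch: pick a model from a hypothetical "model" key in appConfig.
// The key name and accessor are assumptions — check the
// ConversationContext type in your generated project.
function pickModel(context: ConversationContext): string {
  const fromConfig = context.appConfig?.["model"];
  return typeof fromConfig === "string" ? fromConfig : "gpt-4o-mini";
}
```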
Palico helps you create an iterative loop to systematically improve the performance of your LLM application using experiments.
With experiments, you can benchmark different variations of your models, prompts, and application logic against test cases and compare the results (see the sketch below).
Learn more about experiments.
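Palico ships its own experiment tooling (see the docs linked above); conceptually, the loop it automates looks something like the plain-TypeScript sketch below, where each test case is run against two app-config variants and the outputs are compared. None of this is Palico's actual API:

```typescript
// Conceptual sketch only — this illustrates the benchmark-and-compare
// loop that experiments automate, not Palico's experiment API.
type TestCase = { input: string; expected: string };

async function runVariant(
  appConfig: Record<string, unknown>,
  input: string
): Promise<string> {
  // In a real setup this would call your Agent with the given appConfig
  return `response for ${input}`;
}

async function compareVariants(cases: TestCase[]) {
  for (const testCase of cases) {
    const a = await runVariant({ model: "gpt-4o-mini" }, testCase.input);
    const b = await runVariant({ model: "claude-3-5-sonnet" }, testCase.input);
    console.log({ input: testCase.input, a, b, expected: testCase.expected });
  }
}
```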
You can deploy your Palico app to any cloud provider using Docker, or use our managed hosting (coming soon). You can then use our ClientSDK or REST API to communicate with your LLM application.
Learn more from docs.
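For instance, once the app is running, a caller might reach it over HTTP. The route, auth header, and body shape below are purely illustrative placeholders, not Palico's actual REST contract; consult the API docs for the real routes:

```typescript
// Hypothetical REST call — the endpoint, header, and payload are
// placeholders; see Palico's API reference for the real contract.
async function main() {
  const response = await fetch("http://localhost:8000/api/chat", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.PALICO_SERVICE_KEY}`,
    },
    body: JSON.stringify({ userMessage: "Hello!" }),
  });
  console.log(await response.json());
}

main().catch(console.error);
```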
The easiest way to contribute is to pick an issue with the `good first issue` tag 💪. Read the contribution guidelines here.