Build, evaluate, and optimize your LLM pipelines to increase accuracy and reduce cost. Deploy on your own infrastructure.
pip install superpipe-py
Build multistep pipelines and experiment with parameters like models, prompts, and number of RAG results.
from superpipe import grid_search, models

# Define the parameter grid: every combination of model, number of
# retrieved results (k), and prompt will be evaluated.
params_grid = {
    short_description_step.name: {
        'model': [models.gpt35, models.gpt4],
    },
    embedding_search_step.name: {
        'k': [3, 5, 7],
    },
    categorize_step.name: {
        'prompt': [simple_prompt, advanced_prompt],
    },
}

# Run the grid search over the categorizer pipeline on a labeled dataset.
search = grid_search.GridSearch(categorizer, params_grid)
search.run(df)
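Here, categorizer is the pipeline being tuned and short_description_step, embedding_search_step, and categorize_step are its individual steps. The grid search runs every combination of parameters against the labeled dataset df, so each configuration can be compared on cost, speed, and accuracy.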
Dataset management
Build golden sets with easy ground-truth labeling tools
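As a rough sketch (the column names below are illustrative, not a required schema), a golden set can be as simple as a labeled table of inputs and expected outputs:

import pandas as pd

# A tiny golden set: inputs paired with human-verified ground-truth labels.
# Column names are illustrative; use whatever your pipeline expects.
golden_set = pd.DataFrame([
    {"product_title": "Stainless steel chef's knife, 8 inch", "category": "Kitchen > Cutlery"},
    {"product_title": "Wireless noise-cancelling headphones", "category": "Electronics > Audio"},
    {"product_title": "Organic cotton crew socks, 6-pack", "category": "Clothing > Socks"},
])

A labeled table like this can serve as the df passed to the grid search above.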
Experimentation
Compare experiments across cost, speed, and accuracy
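As a hedged sketch, suppose each experiment's results are exported to a pandas DataFrame with one row per configuration (the column names and values below are placeholders, not Superpipe's actual schema); comparing configurations is then a simple sort:

import pandas as pd

# Hypothetical experiment results, one row per parameter combination.
# Columns and values are placeholders, not Superpipe's actual schema.
results = pd.DataFrame([
    {"config": "config_a", "accuracy": 0.80, "avg_cost_usd": 0.001, "avg_latency_s": 1.0},
    {"config": "config_b", "accuracy": 0.90, "avg_cost_usd": 0.010, "avg_latency_s": 3.0},
])

# Rank configurations: highest accuracy first, cheapest first on ties.
print(results.sort_values(["accuracy", "avg_cost_usd"], ascending=[False, True]))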
Observability
Observe pipelines and drill down into logs
Who built Superpipe?
Superpipe was developed by Village Computing Company (formerly Stelo Labs) as an internal tool to support our own pipeline development. We thought it could be useful to others, so we polished the UI and launched Studio.
Can I deploy Superpipe in my own environment?
Yes, absolutely. The Superpipe SDK is completely open-source and Superpipe Studio can be deployed inside your cloud for complete privacy and security.
Is there a hosted version of Superpipe Studio?
No. Superpipe Studio is easy to host yourself, but we don't offer a hosted version since it's not our main focus.
How do I build a pipeline?
You can visit our docs at docs.superpipe.ai to see example pipelines.
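The docs cover the actual step classes and their arguments. Conceptually, a pipeline is an ordered list of steps, each enriching your data before passing it to the next. The plain-Python sketch below only illustrates that idea and is not Superpipe's API; the step names and columns are made up for illustration:

import pandas as pd

# Conceptual illustration only; not Superpipe's API. See docs.superpipe.ai
# for the real step classes and pipeline constructors.

def describe_step(df: pd.DataFrame) -> pd.DataFrame:
    # Stand-in for an LLM step that writes a short description per row.
    df["short_description"] = "summary of " + df["product_title"]
    return df

def label_step(df: pd.DataFrame) -> pd.DataFrame:
    # Stand-in for an LLM step that assigns a category per row.
    df["category"] = "uncategorized"
    return df

pipeline = [describe_step, label_step]

df = pd.DataFrame([{"product_title": "Wireless noise-cancelling headphones"}])
for step in pipeline:
    df = step(df)
print(df)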
What is the benefit of Superpipe?
Superpipe is an experimentation platform for LLM pipelines: it lets you systematically compare models, prompts, and parameters so you can reduce cost and increase accuracy.
Does Superpipe replace LangChain or LlamaIndex?
No. Superpipe works alongside popular libraries like LangChain and LlamaIndex. Visit our docs at docs.superpipe.ai to learn more.
© Stelo Labs, Inc. 2024