Prompt playground #355
base: main
Conversation
Some thoughts here -- a few nits on the structure, then feedback on the UI.
```python
@st.cache_data
def instrument(provider: str):
```
You can replace this all with `init_instruments()`, right?
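A minimal sketch of that replacement, assuming `init_instruments()` is an existing helper in the codebase (the import path below is a guess, not confirmed by this diff):

```python
# Hypothetical: the import path for init_instruments is assumed, not shown in the PR.
from burr.integrations.opentelemetry import init_instruments

# One upfront call instruments all available providers, replacing the
# per-provider, @st.cache_data-wrapped instrument() helper above.
init_instruments()
```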
msg = f"Couldn't instrument {provider}. Try installing `opentelemetry-instrumenation-{provider}" | ||
|
||
if msg: | ||
print(msg) |
Use `logger.exception` here instead of `print(msg)`.
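That is, something like the following sketch; the surrounding try/except, the `instrument` call site, and the exception type are assumed from context:

```python
import logging

logger = logging.getLogger(__name__)

try:
    instrument(provider)  # assumed call site; ImportError is illustrative
except ImportError:
    # logger.exception logs at ERROR level and attaches the active traceback,
    # unlike print(msg), and it respects the app's logging configuration.
    logger.exception(
        "Couldn't instrument %s. Try installing "
        "`opentelemetry-instrumentation-%s`", provider, provider
    )
```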
```python
@action(reads=["history"], writes=["history"])
def generate_answer(
```
I'm not sure of the value of a single-node Burr app; I think it might confuse people.
The standard pattern is to break this into two actions -- one that processes the input, and one that outputs the result of querying the LLM (sketched below).
We could also have one per model we're evaluating, but that's a bit more complex.
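A sketch of that two-action split, in the shape of the usual Burr chat example (the action names, the `model` input, and the LiteLLM call are illustrative, not taken from this diff):

```python
import litellm
from burr.core import State, action

@action(reads=[], writes=["history"])
def process_input(state: State, user_input: str) -> State:
    # First action: just record the user's message in the history.
    return state.append(history={"role": "user", "content": user_input})

@action(reads=["history"], writes=["history"])
def generate_answer(state: State, model: str) -> State:
    # Second action: query the LLM with the accumulated history
    # and append its reply.
    response = litellm.completion(model=model, messages=state["history"])
    content = response.choices[0].message.content
    return state.append(history={"role": "assistant", "content": content})
```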
```python
import litellm
```
Add a README saying this is experimental, plus a bit of instructions. It could also be a tab on the app?
```python
import litellm
```
I'm not sure this is an integration -- it's more of a "tool"? Maybe it should live somewhere else?
Allows you to load traces from a tracked Burr application with OpenLLMetry traces.
You can load previous chat interactions and try new prompts with multiple LLM providers via LiteLLM.
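For context, LiteLLM normalizes the completion call across providers, which is what makes trying one prompt against several models straightforward (the model names below are just examples):

```python
import litellm

prompt = [{"role": "user", "content": "Summarize this trace in one line."}]

# Same call shape regardless of provider; only the model string changes.
for model in ["gpt-4o-mini", "claude-3-haiku-20240307"]:
    response = litellm.completion(model=model, messages=prompt)
    print(model, "->", response.choices[0].message.content)
```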
To launch the app:

```sh
OPENAI_API_KEY=sk-... ANTHROPIC_API_KEY=sk-ant-... streamlit run burr/integrations/playground/app.py
```
Features:

[screenshot: burr-playground]
Limitations: