How to use prompts from your Chatterfile
All prompts and their attributes stored in the Chatterfile can be accessed using the get_prompt method. This method takes a single argument, id, which is the prompt_id of the prompt you want to access. Passing the prompt_id to get_prompt returns the full prompt configuration. Note that prompt templates are converted to jinja2 syntax when they are returned. The configuration comes back as a dictionary, so you can access the prompt's attributes like so:
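A minimal sketch, assuming the library is used through a Chatterfile class whose constructor loads the Chatterfile from the working directory (the import path, class name, prompt_id, and dictionary keys below are assumptions; only get_prompt and its id argument are documented here):

from chatterfile import Chatterfile  # assumed import path

chatterfile = Chatterfile()  # assumed: loads the Chatterfile in the working directory

# Fetch the full configuration for one prompt by its prompt_id.
config = chatterfile.get_prompt(id="intro_prompt")  # 'intro_prompt' is a hypothetical prompt_id

# The configuration is a plain dictionary, so attributes are key lookups.
# The keys 'prompt' and 'model' are illustrative assumptions.
print(config["prompt"])  # the template, returned in jinja2 syntax
print(config["model"])   # whichever model the Chatterfile declares for this prompt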
Your prompts likely contain variables that you want to replace with values from your code. This can be done via the render_prompt method, which takes the prompt_id of the prompt you would like to render and then a dictionary of variables to replace in the prompt. The dictionary should be in the format {'VARIABLE_NAME': 'VARIABLE_VALUE'}.
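For example, a short sketch (the 'greeting' prompt and its NAME variable are hypothetical, and chatterfile is the object assumed above):

# Suppose the Chatterfile defines a prompt with id 'greeting' whose
# template contains a NAME variable -- both are made up for illustration.
rendered = chatterfile.render_prompt("greeting", {"NAME": "Ada"})
print(rendered)  # the prompt text with NAME replaced by 'Ada'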
Run prompts from the Chatterfile using the run_prompt method. Pass in the prompt_id and the variables to inject as variables. This method calls the LLM based on the configuration in the Chatterfile (though you can override these settings) and returns the response from the LLM. With simplified mode, you'll get just the response text.
run_prompt(props)
- prompt_id: the prompt_id referencing the prompt in the Chatterfile.
- variables: dictionary of variables to replace in the prompt, in the form {'VARIABLE_NAME_1': 'VARIABLE_VALUE_1', 'VARIABLE_NAME_2': 'VARIABLE_VALUE_2'}.
- key_id: references the API keys in the Chatterfile to be used in the call. Defaults to the first keys chunk declared in the Chatterfile.
- simplified mode: limits the LLM response to just the response text.
- model override: overrides the model set in the Chatterfile.
- model family override: overrides the model family set in the Chatterfile.
- params override: overrides the params set in the Chatterfile.
- prompt override: overrides the prompt set in the Chatterfile.
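Putting it together, a hedged sketch (the prompt id and variable values are hypothetical; beyond prompt_id and variables, the keyword names mirror the argument list above but are assumptions about the exact signature):

# Run a prompt with its variables filled in. In simplified mode the
# return value is just the LLM's response text.
response = chatterfile.run_prompt(
    prompt_id="greeting",
    variables={"NAME": "Ada"},
)
print(response)

# Per-call overrides; the keyword names 'key_id' and 'model' follow the
# argument list above but are assumptions about the exact spelling.
response = chatterfile.run_prompt(
    prompt_id="greeting",
    variables={"NAME": "Ada"},
    key_id="backup_keys",  # hypothetical key_id declared in the Chatterfile
    model="gpt-4",         # overrides the model set in the Chatterfile
)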