Conversation

@clutchski (Contributor) commented Jan 9, 2026

Addresses #70

@clutchski requested a review from @cpinn on January 9, 2026, 20:10

# Load the prompt using Prompt.load
puts "\nLoading prompt..."
prompt = Braintrust::Prompt.load(project: project_name, slug: prompt_slug)

I wonder whether, when we load prompts, we now need to check which templating language the prompt was created with and fail to load it if that language isn't available in a given SDK.
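
A minimal sketch of what that guard might look like (the metadata field name and the error type are guesses, not the SDK's actual surface):

SUPPORTED_TEMPLATE_FORMATS = %w[mustache].freeze

def check_template_format!(prompt_data)
  # "template_format" is a hypothetical field name, for illustration only.
  format = prompt_data.fetch("template_format", "mustache")
  return if SUPPORTED_TEMPLATE_FORMATS.include?(format)

  raise ArgumentError,
        "prompt uses #{format.inspect} templating, which this SDK cannot render"
end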

private

# Render Mustache template with variables
def render_template(text, variables, strict:)

Did we want to support a no-template option, so people don't have to escape double braces in their prompts if they have any?

Technically you can work around it in Mustache by changing the delimiters, e.g. with {{=<% %>=}}, but it might be nice to have a no-template option as well.
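
For illustration, a minimal sketch of the delimiter workaround with the mustache gem (values are arbitrary):

require "mustache"

# {{=<% %>=}} switches the delimiters, so literal {{ ... }} passes through.
template = "{{=<% %>=}}literal: {{not_a_var}}, interpolated: <%name%>"
puts Mustache.render(template, name: "Ada")
# => literal: {{not_a_var}}, interpolated: Ada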

end
end

Mustache.render(text, variables)

I recently found out we have custom escape logic for Mustache.

It exists here in Python and here in TypeScript.

We should move that custom escaping over to keep the experience consistent.
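
For context, a quick sketch of the stock gem behavior that custom escaping would override (this shows default Mustache escaping, not the SDKs' custom logic):

require "mustache"

vars = { html: "<b>bold</b>" }
puts Mustache.render("escaped: {{html}}", vars)  # => escaped: &lt;b&gt;bold&lt;/b&gt;
puts Mustache.render("raw: {{{html}}}", vars)    # => raw: <b>bold</b>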


@delner (Collaborator) left a comment:

Some suggestions for improvements in the design.

Braintrust.init

# Wrap OpenAI client for tracing
openai = Braintrust::Trace::OpenAI.wrap(OpenAI::Client.new)

Collaborator:
This should be updated to match the new instrumentation API.

# Runtime dependencies
spec.add_runtime_dependency "opentelemetry-sdk", "~> 1.3"
spec.add_runtime_dependency "opentelemetry-exporter-otlp", "~> 0.28"
spec.add_runtime_dependency "mustache", "~> 1.0"

Collaborator:

Why a dependency on mustache? We should avoid external dependencies wherever possible; they invite conflicts with user apps, which limits where the gem can be deployed.


Contributor Author:

We use mustache for templates. What should I do instead?

# In practice, you would create prompts via the Braintrust UI
puts "Creating prompt..."

api = Braintrust::API.new

Collaborator:

Just to clarify: do we want API to be a first-class public interface, rather than a support class for enabling other features (e.g. Evals)?

# @param state [State, nil] Braintrust state (default: global)
# @return [Prompt]
def self.load(project:, slug:, version: nil, defaults: {}, state: nil)
state ||= Braintrust.current_state

@delner (Collaborator) commented Jan 14, 2026:

Should Prompt define this dependency on State, or should API? If API is meant to be a centrally used/reused public object, perhaps we should pass that in instead of State. The use of State here reads like an implementation detail; State should probably be an internal construct. I don't see Prompt using State beyond passing it through to API.
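
A rough sketch of the suggested injection (the api.prompts.fetch call and the constructor are hypothetical stand-ins, not the SDK's real surface):

# Hypothetical: inject the API client directly instead of threading State through.
def self.load(project:, slug:, version: nil, defaults: {}, api: nil)
  api ||= Braintrust::API.new
  data = api.prompts.fetch(project: project, slug: slug, version: version) # hypothetical call
  new(data, defaults: defaults)
end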

end
end

Mustache.render(text, variables)

Collaborator:

Perhaps a naive thought, but couldn't this be done with a simple gsub instead of pulling in this dependency?
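
For illustration, a gsub-only version might look like the sketch below; note it handles only simple {{name}} interpolation, none of Mustache's sections, HTML escaping, or delimiter changes:

# A minimal, dependency-free substitution (a sketch, not the SDK's code).
def render_simple(text, variables)
  text.gsub(/\{\{\s*(\w+)\s*\}\}/) do
    variables.fetch(Regexp.last_match(1).to_sym, "")
  end
end

render_simple("Hello, {{name}}!", name: "Ada") # => "Hello, Ada!"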


We use mustache for templating in the UI when you create a prompt.

We recently also added the ability to have no template, or nunjucks (a Jinja-like templating language).

This adds the same capability to the Ruby SDK that already exists in the Python and TypeScript SDKs: if a user creates a prompt with mustache in the UI, it can be loaded from this SDK and rendered the same way.

private

# Render Mustache template with variables
def render_template(text, variables, strict:)

Collaborator:

I don't see anything prompt-specific in this function or in the variable-manipulation methods. I think this should be extracted into Internal as a general utility.
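
Something along these lines, perhaps (the module name, location, and strict semantics are hypothetical):

require "mustache"

module Braintrust
  module Internal
    # General-purpose template rendering, decoupled from Prompt.
    module Template
      module_function

      def render(text, variables, strict: false)
        if strict
          # One possible strict semantics: fail fast on missing variables.
          referenced = text.scan(/\{\{\s*(\w+)\s*\}\}/).flatten.map(&:to_sym)
          missing = referenced - variables.keys
          raise ArgumentError, "missing template variables: #{missing.join(", ")}" if missing.any?
        end
        Mustache.render(text, variables)
      end
    end
  end
end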

# Clean up
api.functions.delete(id: prompt.id)
ensure
OpenTelemetry.tracer_provider.shutdown

Collaborator:

Why does prompt loading need to shut down an OTel global afterward? My concern is that this is a sign of leaking global state/side effects, which has been a major contributor to flaky failures in the test suite.

If at all possible, we should avoid mutating global state (use mocks/stubs) where it's not part of the subject under test. (For example, State is not under test here.) Where that is not possible or practical, we can wrap the test with assert_in_fork to run it in isolation. This should be used sparingly, though: although it gives a very strong guarantee that global state will not escape, it's not fast, especially when repeated.
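
For instance, a stub-based test might look roughly like this (Minitest-style; the fake API shape and the Prompt internals are guesses, not the real interfaces):

require "minitest/autorun"
require "minitest/mock"

class PromptLoadTest < Minitest::Test
  # Hypothetical fake standing in for Braintrust::API: no network, no globals.
  FakePromptAPI = Struct.new(:payload) do
    def fetch_prompt(**_kwargs)
      payload
    end
  end

  def test_load_does_not_mutate_otel_globals
    fake = FakePromptAPI.new({ "slug" => "my-prompt" })
    Braintrust::API.stub(:new, fake) do
      prompt = Braintrust::Prompt.load(project: "demo", slug: "my-prompt")
      assert_equal "my-prompt", prompt.slug
    end
    # No OpenTelemetry.tracer_provider.shutdown needed: nothing global changed.
  end
end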
