Describe the bug
When using openinference-instrumentation-agno with Agno agents in streaming mode, the OpenInference instrumentation sometimes fails to set the output.value span attribute on Agent.run spans.
Because output.value is missing, Langfuse (via OTLP) displays the output as undefined, even though the agent produces a valid final answer.
Root cause appears to be that the instrumentation only uses RunOutput.content and only sets output.value when content is non‑None. In streaming mode content can be None or incomplete while the real output is available via other fields on RunOutput (e.g. get_content_as_string() / messages).
To Reproduce
1. Install:

   ```
   agno==2.3.2
   openinference-instrumentation-agno==0.1.23
   opentelemetry-sdk
   opentelemetry-exporter-otlp-proto-http
   ```

2. Configure OTLP → Langfuse (or any OTLP backend):

   ```python
   from opentelemetry import trace
   from opentelemetry.sdk.trace import TracerProvider
   from opentelemetry.sdk.trace.export import SimpleSpanProcessor
   from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
   from openinference.instrumentation.agno import AgnoInstrumentor

   tracer_provider = TracerProvider()
   tracer_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter()))
   trace.set_tracer_provider(tracer_provider)

   AgnoInstrumentor().instrument()
   ```
3. Run an Agno agent with streaming:

   ```python
   import os

   from agno.agent import Agent
   from agno.models.google import Gemini
   from agno.tools import tool


   @tool
   def add(a: float, b: float) -> float:
       return a + b


   model = Gemini(
       id="gemini-2.5-flash",
       project_id=os.getenv("GOOGLE_CLOUD_PROJECT_ID"),
       location=os.getenv("GOOGLE_CLOUD_LOCATION", "us-central1"),
       vertexai=True,
       include_thoughts=True,
   )

   agent = Agent(
       model=model,
       tools=[add],
       reasoning=True,
       markdown=True,
       debug_mode=True,
   )

   for chunk in agent.run(
       "I need to calculate: Start with 100, add 25, then subtract 15",
       stream=True,
   ):
       pass
   ```
4. Look at the exported spans in:
   - Langfuse (OTLP traces), or
   - a custom OTLP/console exporter.
5. Inspect the `Agent.run` span: `output.mime_type` is set, but `output.value` is often missing or empty for some runs.
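For step 4, if no Langfuse instance is handy, the SDK's built-in console exporter can stand in for the OTLP backend. This is a sketch; it replaces the `OTLPSpanExporter` wiring from step 2 so every finished span, attributes included, is printed to stdout and a missing `output.value` is visible directly:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Print every finished span (with its attributes) to stdout instead of
# shipping it over OTLP.
tracer_provider = TracerProvider()
tracer_provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(tracer_provider)
```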
Expected behavior
- `output.value` (`SpanAttributes.OUTPUT_VALUE`, semantic name `output.value`) should always be set on `Agent.run` spans to a string representation of the final `RunOutput` (or an empty string at minimum).
- Downstream backends like Langfuse should never see "undefined" when a final answer exists; they should see the model/agent output.

This matches OpenInference's semantic conventions for LLM / agent traces.
Screenshots
- Langfuse traces show:
  - inputs correctly populated,
  - spans for `Agent.run` present,
  - the output field displayed as `undefined`, because `output.value` is not present on the span.
(You can attach your actual screenshots here.)
Desktop (please complete the following information):
- OS: macOS (e.g. Sonoma 14.x)
- Python: 3.11.x
- Agno: 2.3.2
- openinference-instrumentation-agno: 0.1.23
- opentelemetry-sdk: (version you're using)
- opentelemetry-exporter-otlp-proto-http: (version you're using)
Additional context
Current implementation (problematic)
In _runs_wrapper.py:
```python
def _extract_run_response_output(run_response: Union[RunOutput, TeamRunOutput]) -> str:
    if run_response and run_response.content:
        if isinstance(run_response.content, str):
            return run_response.content
        else:
            return str(run_response.content.model_dump_json())
    return ""
```

And in `run_stream`:

```python
if run_response is not None:
    if run_response.content is not None:
        span.set_attribute(OUTPUT_VALUE, _extract_run_response_output(run_response))
        span.set_attribute(OUTPUT_MIME_TYPE, JSON)
        span.set_status(trace_api.StatusCode.OK)
```

Issues:
- `RunOutput.content` can be `None` in streaming mode even when there is a valid final output.
- `RunOutput` exposes other ways to get the output: `get_content_as_string()` and `messages` (assistant messages).
- Because of the `if run_response.content is not None` guard, `output.value` is sometimes never set.
Observed behavior with a debug exporter
Using a custom SpanExporter, I can see:
- Some `Agent.run` spans have:
  - `output.value: ""`
  - `output.mime_type: "application/json"`
- Other `Agent.run` spans have:
  - `output.value` completely missing
  - `output.mime_type: "application/json"`

Those empty/missing cases correspond to the "undefined" output in Langfuse.
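The check the debug exporter performs can be reduced to a small helper. The names and the `(span_name, attributes_dict)` shape here are illustrative, mirroring what a custom `SpanExporter.export()` would iterate over:

```python
def find_missing_outputs(spans):
    """Return the names of Agent.run spans whose output.value is
    absent or empty, i.e. the ones Langfuse renders as "undefined"."""
    return [
        name
        for name, attrs in spans
        if name == "Agent.run" and not attrs.get("output.value")
    ]


spans = [
    ("Agent.run", {"output.value": "110", "output.mime_type": "application/json"}),
    ("Agent.run", {"output.value": "", "output.mime_type": "application/json"}),
    ("Agent.run", {"output.mime_type": "application/json"}),  # attribute missing
]
print(len(find_missing_outputs(spans)))  # → 2
```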
Proposed direction
1. Update `_extract_run_response_output` to:
   - try `content`,
   - fall back to `get_content_as_string()`,
   - fall back to `messages` (last assistant message),
   - always return a string (possibly empty).
2. Update `run_stream` / `arun_stream` to:
   - always set `OUTPUT_VALUE` (no `if run_response.content is not None` guard),
   - optionally fall back to `instance.get_last_run_output()` when `run_response` is `None` or has empty `content`.
I have a working local change that fixes the issue end‑to‑end (Langfuse no longer shows “undefined”) and I’m happy to open a PR if this direction makes sense.