Update to rm collect_runs #459

Open · wants to merge 2 commits into base: main
@@ -223,59 +223,6 @@ await chain.invoke({ input: "What is the meaning of life?" }, { runId: myUuid })

Note that if you do this at the **root** of a trace (i.e., the top-level run), that run ID will be used as the `trace_id`.
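A minimal sketch of that pattern (assuming a `chain` built as in the snippet above; the LangChain and LangSmith calls are shown in comments so the snippet stays self-contained, and `read_run` is the LangSmith client method for fetching a run by ID):

```python
import uuid

# Mint the run ID yourself so you can log or store it before invoking.
my_uuid = str(uuid.uuid4())

# Passed at the root of a trace, this ID doubles as the trace_id:
#   await chain.ainvoke({"input": "What is the meaning of life?"}, {"run_id": my_uuid})
# so the whole trace can later be fetched by the same value:
#   from langsmith import Client
#   run = Client().read_run(my_uuid)

print(my_uuid)
```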

## Access run (span) ID for LangChain invocations

When you invoke a LangChain object, you can access the run ID of the invocation. This run ID can be used to query the run in LangSmith.

In Python, you can use the `collect_runs` context manager to access the run ID.

In JS/TS, you can use a `RunCollectorCallbackHandler` instance to access the run ID.

<CodeTabs
tabs={[
PythonBlock(`from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.tracers.context import collect_runs\n
prompt = ChatPromptTemplate.from_messages([
("system", "You are a helpful assistant. Please respond to the user's request only based on the given context."),
("user", "Question: {question}\\n\\nContext: {context}")
])
model = ChatOpenAI(model="gpt-3.5-turbo")
output_parser = StrOutputParser()\n
chain = prompt | model | output_parser\n
question = "Can you summarize this morning's meetings?"
context = "During this morning's meeting, we solved all world conflict."
with collect_runs() as cb:
    result = chain.invoke({"question": question, "context": context})
    # Get the root run id
    # highlight-next-line
    run_id = cb.traced_runs[0].id
    print(run_id)`),
TypeScriptBlock(`import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { RunCollectorCallbackHandler } from "@langchain/core/tracers/run_collector";\n
const prompt = ChatPromptTemplate.fromMessages([
["system", "You are a helpful assistant. Please respond to the user's request only based on the given context."],
["user", "Question: {question}\\n\\nContext: {context}"],
]);
const model = new ChatOpenAI({ modelName: "gpt-3.5-turbo" });
const outputParser = new StringOutputParser();\n
const chain = prompt.pipe(model).pipe(outputParser);
const runCollector = new RunCollectorCallbackHandler();\n
const question = "Can you summarize this morning's meetings?";
const context = "During this morning's meeting, we solved all world conflict.";
await chain.invoke(
{ question: question, context: context },
{ callbacks: [runCollector] }
);
// highlight-next-line
const runId = runCollector.tracedRuns[0].id;
console.log(runId);`),
]}
groupId="client-language"
/>

## Ensure all traces are submitted before exiting

@@ -329,20 +276,20 @@ This largely builds off of the [previous section](#trace-selectively).
<CodeTabs
tabs={[
PythonBlock(`from langchain.callbacks.tracers import LangChainTracer
from langsmith import Client\n
import langsmith as ls\n
# You can create a client instance with an api key and api url
client = Client(
client = ls.Client(
api_key="YOUR_API_KEY", # This can be retrieved from a secrets manager
api_url="https://api.smith.langchain.com", # Update appropriately for self-hosted installations or the EU region
)\n
# You can pass the client and project_name to the LangChainTracer instance
# highlight-next-line
tracer = LangChainTracer(client=client, project_name="test-no-env")
chain.invoke({"question": "Am I using a callback?", "context": "I'm using a callback"}, config={"callbacks": [tracer]})\n
# LangChain Python also supports a context manager which allows passing the client and project_name
from langchain_core.tracers.context import tracing_v2_enabled
# LangSmith Python also supports a context manager which allows passing the client and project_name
import langsmith as ls
# highlight-next-line
with tracing_v2_enabled(client=client, project_name="test-no-env"):
with ls.tracing_context(client=client, project_name="test-no-env", enabled=True):
    chain.invoke({"question": "Am I using a context manager?", "context": "I'm using a context manager"})`),
TypeScriptBlock(`import { LangChainTracer } from "@langchain/core/tracers/tracer_langchain";
import { Client } from "langsmith";\n