The Superface tool function descriptions are available as a JSON schema that follows the OpenAI tool definition format. Although this format is common across LLM providers, there are slight differences depending on the model you choose to use.

LangChain provides tooling that standardizes how functions are passed to LLMs, as well as how the response from the model is handled, regardless of the model you use.

Using LangChain is our recommended approach if you are working with LLM providers such as Anthropic, Cohere, or Google (Gemini Pro).

Example breakdown

To begin working with LangChain, the core package must be installed.

pip install langchain

You will also need to install the wrapper for the API of the LLM that you want to use. For example, with Anthropic:

pip install langchain-anthropic

Setup

Then import the necessary packages:

import json
import requests as r
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.messages import AIMessage, HumanMessage

Helper function

A helper function is required to retrieve the function descriptions for the tools that have been added to your Superface account. You can read more about the fd endpoint in our Endpoints documentation.

SUPERFACE_BASE_URL = "https://pod.superface.ai/api/hub"
SUPERFACE_AUTH_TOKEN = "<your-superface-auth-token>"

def get_superface_tools():
    headers = {"Authorization": "Bearer " + SUPERFACE_AUTH_TOKEN}
    tools = r.get(SUPERFACE_BASE_URL + "/fd", headers=headers)
    return tools.json()
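The fd endpoint returns a list of tool definitions in the OpenAI tool format. As a rough illustration only (the actual tool names, descriptions, and parameters depend on the tools added to your Superface account), an entry in that list is shaped like this:

```python
import json

# Hypothetical example of the OpenAI tool definition shape that the
# fd endpoint returns; the tool name and parameters here are made up.
example_tools = [
    {
        "type": "function",
        "function": {
            "name": "example_tool",
            "description": "An example tool description.",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "The search query.",
                    }
                },
                "required": ["query"],
            },
        },
    }
]

print(json.dumps(example_tools, indent=2))
```

Because this is the same structure LangChain expects, the list can be passed to bind_tools as-is.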

LLM setup

Next, set up the LLM you want to use (in this example we use Anthropic's Claude 3 Opus):

from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-3-opus-20240229", temperature=0, api_key=ANTHROPIC_API_KEY)

Then bind the tools from Superface to the LLM using the helper function and LangChain's bind_tools method:

tools = get_superface_tools()
llm_anthropic = llm.bind_tools(tools)
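With the tools bound, the model can be invoked as usual. The following sketch (assuming a valid Anthropic API key is configured; the prompt text is purely illustrative) sends a message and inspects any tool calls the model decided to make:

```python
from langchain_core.messages import HumanMessage

# Illustrative prompt; replace with a task relevant to your tools.
messages = [HumanMessage(content="What is the weather like in Prague?")]
response = llm_anthropic.invoke(messages)

# When the model chooses to use a tool, the returned AIMessage carries
# a tool_calls list with the tool name and the arguments to call it with.
for tool_call in response.tool_calls:
    print(tool_call["name"], tool_call["args"])
```

Each entry in tool_calls can then be executed against Superface, and the result passed back to the model as a tool message to produce the final answer.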