In this tutorial, we will explore how to leverage the capabilities of Fireworks AI for building intelligent, tool-enabled agents with LangChain. Starting from installing the langchain-fireworks package and configuring your Fireworks API key, we’ll set up a ChatFireworks LLM instance, powered by the high-performance llama-v3-70b-instruct model, and integrate it with LangChain’s agent framework. Along the way, we’ll define custom tools such as a URL fetcher for scraping webpage text and an SQL generator for converting plain-language requirements into executable BigQuery queries. By the end, we’ll have a fully functional ReAct-style agent that can dynamically invoke tools, maintain conversational memory, and deliver sophisticated, end-to-end workflows powered by Fireworks AI.
!pip install -qU langchain langchain-fireworks requests beautifulsoup4
We bootstrap the environment by installing all the required Python packages, including langchain, its Fireworks integration, and common utilities such as requests and beautifulsoup4. This ensures that we have the latest versions of all necessary components to run the rest of the notebook seamlessly.
import requests
from bs4 import BeautifulSoup
from langchain.tools import BaseTool
from langchain.agents import initialize_agent, AgentType
from langchain_fireworks import ChatFireworks
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain.memory import ConversationBufferMemory
import getpass
import os
We bring in all the necessary imports: HTTP and HTML-parsing utilities (requests, BeautifulSoup), the LangChain agent framework (BaseTool, initialize_agent, AgentType), the Fireworks-powered LLM (ChatFireworks), plus prompt and memory utilities (LLMChain, PromptTemplate, ConversationBufferMemory), as well as standard modules for secure input and environment management.
os.environ["FIREWORKS_API_KEY"] = getpass.getpass("🚀 Enter your Fireworks API key: ")
This step prompts you to enter your Fireworks API key securely via getpass.getpass and stores it in the environment. It ensures that subsequent calls to the ChatFireworks model are authenticated without exposing your key in plain text.
llm = ChatFireworks(
    model="accounts/fireworks/models/llama-v3-70b-instruct",
    temperature=0.6,
    max_tokens=1024,
    stop=["\n\n"],
)
We demonstrate how to instantiate a ChatFireworks LLM configured for instruction-following, utilizing llama-v3-70b-instruct, a moderate temperature, and a token limit, allowing you to immediately start issuing prompts to the model.
prompt = [
    {"role": "system", "content": "You are an expert data-scientist assistant."},
    {"role": "user", "content": "Analyze the sentiment of this review:\n\n"
                                "\"The new movie was breathtaking, but a bit too long.\""},
]
resp = llm.invoke(prompt)
print("Sentiment Analysis →", resp.content)
Next comes a simple sentiment-analysis example: we build a structured prompt as a list of role-annotated messages, invoke llm.invoke(), and print the model’s sentiment interpretation of the provided movie review.
template = """
You are a data-science assistant. Keep track of the convo:
{history}
User: {input}
Assistant:"""
prompt = PromptTemplate(input_variables=["history","input"], template=template)
memory = ConversationBufferMemory(memory_key="history")
chain = LLMChain(llm=llm, prompt=prompt, memory=memory)
print(chain.run(input="Hey, what can you do?"))
print(chain.run(input="Analyze: 'The product arrived late, but support was helpful.'"))
print(chain.run(input="Based on that, would you recommend the service?"))
We illustrate how to add conversational memory, which involves defining a prompt template that incorporates past exchanges, setting up a ConversationBufferMemory, and chaining everything together with LLMChain. Running a few sample inputs shows how the model retains context across turns.
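Under the hood, ConversationBufferMemory simply accumulates past turns and injects them into the {history} slot of the prompt on every call. The following is a minimal pure-Python sketch of that pattern (a toy illustration of the idea, not LangChain’s actual implementation):

```python
# Toy illustration of the conversation-buffer pattern: each turn is
# appended to a transcript that fills the {history} slot of the prompt.
# This is NOT LangChain's implementation -- just the core idea.

class ToyBufferMemory:
    def __init__(self):
        self.turns = []  # list of (user, assistant) pairs

    def render_history(self) -> str:
        # Flatten stored turns into the text placed at {history}
        return "\n".join(f"User: {u}\nAssistant: {a}" for u, a in self.turns)

    def save_turn(self, user: str, assistant: str) -> None:
        self.turns.append((user, assistant))


memory = ToyBufferMemory()
memory.save_turn("Hey, what can you do?", "I can analyze data and text.")
memory.save_turn("Analyze this review.", "The sentiment is mixed.")

# The prompt sent on the next turn embeds the full transcript:
prompt_text = (
    "You are a data-science assistant. Keep track of the convo:\n"
    f"{memory.render_history()}\n"
    "User: Based on that, would you recommend the service?\nAssistant:"
)
print(prompt_text)
```

Because the entire transcript is re-sent on every turn, the buffer grows with the conversation, which is why LangChain also offers windowed and summarizing memory variants.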
class FetchURLTool(BaseTool):
    name: str = "fetch_url"
    description: str = "Fetch the main text (first few paragraphs) from a webpage."

    def _run(self, url: str) -> str:
        resp = requests.get(url, timeout=10)
        doc = BeautifulSoup(resp.text, "html.parser")
        paras = [p.get_text() for p in doc.find_all("p")][:5]
        return "\n\n".join(paras)

    async def _arun(self, url: str) -> str:
        raise NotImplementedError
We define a custom FetchURLTool by subclassing BaseTool. This tool fetches the first few paragraphs from any URL using requests and BeautifulSoup, making it easy for your agent to retrieve live web content.
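The extraction step can be sanity-checked offline. The sketch below reproduces the tool’s core behavior using only the standard library (html.parser in place of BeautifulSoup, and an inline HTML string in place of a live request), so you can verify the paragraph-collection logic without network access:

```python
# Offline sketch of FetchURLTool's extraction step, using only the
# standard library: collect text from the first few <p> tags.
from html.parser import HTMLParser


class ParagraphExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.paras = []
        self._in_p = False

    def handle_starttag(self, tag, attrs):
        if tag == "p":
            self._in_p = True
            self.paras.append("")

    def handle_endtag(self, tag):
        if tag == "p":
            self._in_p = False

    def handle_data(self, data):
        if self._in_p:
            self.paras[-1] += data


html_doc = """
<html><body>
  <h1>ChatGPT</h1>
  <p>ChatGPT is a chatbot.</p>
  <p>It is based on a large language model.</p>
  <p>Third paragraph.</p>
</body></html>
"""

parser = ParagraphExtractor()
parser.feed(html_doc)
text = "\n\n".join(parser.paras[:5])  # mirror the [:5] cap in the tool
print(text)
```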
class GenerateSQLTool(BaseTool):
    name: str = "generate_sql"
    description: str = "Generate a BigQuery SQL query (with comments) from a text description."

    def _run(self, text: str) -> str:
        prompt = f"""
-- Requirement:
-- {text}
-- Write a BigQuery SQL query (with comments) to satisfy the above.
"""
        return llm.invoke([{"role": "user", "content": prompt}]).content

    async def _arun(self, text: str) -> str:
        raise NotImplementedError
tools = [FetchURLTool(), GenerateSQLTool()]

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)
result = agent.run(
    "Fetch https://en.wikipedia.org/wiki/ChatGPT "
    "and then generate a BigQuery SQL query that counts how many times "
    "the word 'model' appears in the page text."
)
print("\n🔍 Generated SQL:\n", result)
Finally, GenerateSQLTool is another BaseTool subclass that wraps the LLM to transform plain-English requirements into commented BigQuery SQL. We then wire both tools into a ReAct-style agent via initialize_agent, run a combined fetch-and-generate example, and print the resulting SQL query.
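To make sense of the verbose=True trace, it helps to picture the ReAct loop: the model emits "Action:" and "Action Input:" lines, the framework runs the named tool, and the observation is fed back for the next reasoning step. The stripped-down sketch below illustrates just the dispatch step (a toy with hypothetical stand-in tools, not LangChain’s actual parser):

```python
# Toy sketch of the ReAct dispatch step: parse the model's
# "Action: <tool>" / "Action Input: <arg>" lines and run the tool.
# This mimics what initialize_agent does internally; it is NOT
# LangChain's actual implementation.

def dispatch(model_output: str, tools: dict) -> str:
    action, action_input = None, None
    for line in model_output.splitlines():
        if line.startswith("Action:"):
            action = line.split(":", 1)[1].strip()
        elif line.startswith("Action Input:"):
            action_input = line.split(":", 1)[1].strip()
    if action not in tools:
        return f"Error: unknown tool '{action}'"
    return tools[action](action_input)


# Hypothetical stand-ins for the two tools defined above
toy_tools = {
    "fetch_url": lambda url: f"(first paragraphs of {url})",
    "generate_sql": lambda req: f"-- SQL for: {req}",
}

step = (
    "Thought: I need the page text first.\n"
    "Action: fetch_url\n"
    "Action Input: https://en.wikipedia.org/wiki/ChatGPT"
)
observation = dispatch(step, toy_tools)
print("Observation:", observation)
```

In the real agent, this observation is appended to the scratchpad and the LLM is called again, looping until it emits a final answer instead of another action.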
In conclusion, we have integrated Fireworks AI with LangChain’s modular tooling and agent ecosystem, unlocking a versatile platform for building AI applications that extend beyond simple text generation. We can extend the agent’s capabilities by adding domain-specific tools, customizing prompts, and fine-tuning memory behavior, all while leveraging Fireworks’ scalable inference engine. As next steps, explore advanced features such as function-calling, chaining multiple agents, or incorporating vector-based retrieval to craft even more dynamic and context-aware assistants.
Asif Razzaq is the CEO of Marktechpost Media Inc.. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence Media Platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts of over 2 million monthly views, illustrating its popularity among audiences.