Initialization
To integrate with Runloop and your preferred LLM provider, initialize the respective SDK clients.
from anthropic import Anthropic
from runloop_api_client import Runloop
import os
# Initialize clients
anthropic = Anthropic(api_key=os.environ.get("ANTHROPIC_API_KEY"))
runloop = Runloop(bearer_token=os.environ.get("RUNLOOP_API_KEY"))
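Both clients read their credentials from environment variables. A small fail-fast check (a sketch of our own, not part of either SDK) surfaces a missing key immediately instead of at the first API call:

```python
import os

def require_env(name: str) -> str:
    """Return the value of an environment variable, or raise a clear error."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

# Example usage before constructing the clients:
# anthropic_key = require_env("ANTHROPIC_API_KEY")
# runloop_key = require_env("RUNLOOP_API_KEY")
```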
Defining Prompts
Defining a clear, actionable prompt ensures accurate LLM responses.
system_prompt = (
    "You are a helpful coding assistant that can generate and execute Python code. "
    "You only respond with the code to be executed and nothing else. "
    "Strip backticks in code blocks."
)
prompt = (
    "Write a Python script that generates a maze. The script should:\n"
    "1. Accept a size parameter from command line arguments\n"
    "2. Generate a random maze of the specified size. Remember to make the maze solvable "
    "and easy, and to clearly mark the outer borders of the maze.\n"
    "3. Print the maze where '#' represents walls and ' ' represents paths. "
    "Mark the maze start with 'S' and end with 'E'.\n"
    "4. Use argparse for command line argument parsing\n"
    "The code should be in the format of a Python script that can be run directly "
    "with 'python gen_maze.py --size 5'. "
    "ONLY output the code and do NOT wrap the code in markdown! The code should begin "
    "with an import and end with a print statement."
)
Generating Code
Send the defined prompts to the LLM's messages endpoint, configure parameters such as the model and token limit, and extract the generated code from the response.
# Generate code using Claude
response = anthropic.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    system=system_prompt,
    messages=[
        {"role": "user", "content": prompt}
    ]
)
maze_generation_script = response.content[0].text
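Even when instructed to omit markdown, models occasionally wrap code in triple-backtick fences. A small defensive helper (our own sketch, not part of either SDK) strips them before the script is written to the Devbox:

```python
def strip_code_fences(text: str) -> str:
    """Remove a leading/trailing markdown code fence from LLM output, if present."""
    lines = text.strip().splitlines()
    if lines and lines[0].startswith("```"):
        lines = lines[1:]  # drop opening fence (possibly "```python")
    if lines and lines[-1].startswith("```"):
        lines = lines[:-1]  # drop closing fence
    return "\n".join(lines)

# Example: maze_generation_script = strip_code_fences(response.content[0].text)
```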
Running Code on a Devbox
After retrieving the code from the LLM, execute it in a Runloop Devbox.
devbox = runloop.devboxes.create_and_await_running()
print("Devbox ID:", devbox.id)
runloop.devboxes.write_file_contents(
    devbox.id,
    file_path="gen_maze.py",
    contents=maze_generation_script
)
result = runloop.devboxes.execute_sync(
    devbox.id,
    command="python gen_maze.py --size 5"
)
if not result.exit_status:
    print("Maze generated successfully\n", result.stdout)
else:
    print("Script execution failed:", result.stderr)
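Because the prompt asks for a solvable maze, the script's output can be sanity-checked before it is trusted. The helper below (a hypothetical sketch of our own, not part of the Runloop SDK) parses the printed grid and runs a breadth-first search from 'S' to 'E':

```python
from collections import deque

def maze_is_solvable(maze_text: str) -> bool:
    """Return True if a path of non-wall cells connects 'S' to 'E'."""
    grid = [list(row) for row in maze_text.splitlines() if row]
    cells = {(r, c): ch for r, row in enumerate(grid) for c, ch in enumerate(row)}
    start = next((pos for pos, ch in cells.items() if ch == "S"), None)
    end = next((pos for pos, ch in cells.items() if ch == "E"), None)
    if start is None or end is None:
        return False
    queue, seen = deque([start]), {start}
    while queue:
        r, c = queue.popleft()
        if (r, c) == end:
            return True
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if nxt in cells and cells[nxt] != "#" and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False
```

For example, `maze_is_solvable(result.stdout)` could gate whether the generated maze is accepted or the generation step is retried.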
Integrating with Popular Frameworks
The examples below show integration of Runloop with popular frameworks and LLM providers.
The examples follow this structure:
Client Initialization: Set up SDK clients with environment variables.
Prompt Definition: Use pre-defined system and user prompts.
Code Generation: Generate code based on the prompts.
Execution: Run the code in a secure Runloop Devbox.
Prompts are defined above and reused across examples.
The TypeScript examples may use the non-null assertion operator (!); replace it with default values or explicit checks as needed.
TypeScript Integrations
Claude
Gemini
LangChain
LlamaIndex
Mistral
OpenAI
VercelAI
import Runloop from '@runloop/api-client';
import Anthropic from '@anthropic-ai/sdk';

// Initialize clients
const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });
const runloop = new Runloop({ bearerToken: process.env.RUNLOOP_API_KEY });

async function generateMazeCreator() {
  try {
    const { content } = await anthropic.messages.create({
      model: "claude-3-5-sonnet-20241022",
      max_tokens: 1000,
      temperature: 0,
      system: "Respond only with code. Do not include any markdown or comments.",
      messages: [{
        role: "user",
        content: [
          {
            type: "text",
            text: prompt
          }
        ]
      }]
    });
    const mazeGenerationScript = (content[0] as { text: string }).text;

    // Execute the script in a Devbox
    const devbox = await runloop.devboxes.createAndAwaitRunning();
    console.log(`Devbox ID: ${devbox.id}`);
    await runloop.devboxes.writeFileContents(devbox.id, {
      file_path: "gen_maze.py",
      contents: mazeGenerationScript,
    });
    const { exit_status, stdout, stderr } = await runloop.devboxes.executeSync(devbox.id, {
      command: "python gen_maze.py --size 11",
    });
    exit_status === 0
      ? console.log("Maze generated successfully\n", stdout)
      : console.error("Maze generation failed\n", stderr);
  } catch (error) {
    console.error("Error:", error);
  }
}

generateMazeCreator();
Python Integrations
Claude
Gemini
LangChain
LlamaIndex
Mistral
OpenAI
CrewAI
from anthropic import Anthropic
from runloop_api_client import Runloop
import os

# Initialize clients
anthropic = Anthropic(api_key=os.environ.get("ANTHROPIC_API_KEY"))
runloop = Runloop(bearer_token=os.environ.get("RUNLOOP_API_KEY"))

def generate_maze_creator():
    try:
        # Generate code using Claude
        response = anthropic.messages.create(
            model="claude-3-5-sonnet-20240620",
            max_tokens=1024,
            system=system_prompt,
            messages=[
                {"role": "user", "content": prompt}
            ]
        )
        maze_generation_script = response.content[0].text

        # Execute the script in a Devbox
        devbox = runloop.devboxes.create_and_await_running()
        print("Devbox ID:", devbox.id)
        runloop.devboxes.write_file_contents(
            devbox.id,
            file_path="gen_maze.py",
            contents=maze_generation_script
        )
        result = runloop.devboxes.execute_sync(
            devbox.id,
            command="python gen_maze.py --size 10"
        )
        if not result.exit_status:
            print("Maze generated successfully\n", result.stdout)
        else:
            print("Script execution failed:", result.stderr)
    except Exception as e:
        print("Error:", e)

if __name__ == "__main__":
    generate_maze_creator()