The Runloop Agent Runtime

The Runloop Agent Runtime lets you deploy long-running agents with no timeouts. Deployed agents can interact with devboxes, with other agents on the platform, or with your other applications via API.

Creating an agent for deployment with Runloop

Writing an agent

First, create a new repository to house your agent code. To start, you can fork the example repository here. Writing an agent as a Runloop lambda is as simple as annotating a normal Python function with the @runloop.function decorator. Any function annotated with @runloop.function is automatically deployed as a lambda on Runloop’s serverless platform. The function’s parameters become the agent’s request schema, and its return type becomes the response schema.
All functions annotated with @runloop.function must explicitly declare all parameter and return types.
Additionally, any function annotated with @runloop.function can declare a parameter of type runloop.SystemCoordinator, which gives your function access to the full suite of Runloop tools. This includes the ability to create devboxes that serve as isolated environments for running arbitrary code.
import runloop

# Lambda functions can take in arbitrary inputs and return arbitrary outputs.
# They will automatically run in a container with all the dependencies needed.
@runloop.function
def example_agent(name: str, system_coordinator: runloop.SystemCoordinator) -> str:
    # We create a devbox tied to this agent run. It will be automatically shut down when the agent finishes.
    devbox = system_coordinator.create_devbox()

    # We can use shell tools and file tools to interact with the devbox, including running arbitrary commands and reading and writing files.
    # You can use the devbox to run arbitrary code generated by your AI agent in a secure, isolated environment.
    exec_result = devbox.shell_tools.exec("echo 'executed hello from devbox'")

    return f"Devbox ID: {devbox.id}\nExec Result: {exec_result}. User: {name}"

Telling Runloop where to find your agent code

Now that we have the agent code written, we need to tell Runloop where to find it so it can be automatically deployed. All Python agent repositories need to contain two files:
  1. a requirements.txt file to declare dependencies for your agent
  2. a runloop.toml file to declare the path to your agent code
For example, if your agent code is in a file called bot.py inside a folder called agent at the root of your repository, your runloop.toml file should look like this:
[module]
path = "./agent"
name = "bot"
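The requirements.txt file is a standard pip requirements file listing your agent's dependencies. A minimal sketch might look like the following; the package name used here for the Runloop SDK is an assumption, so match whatever the example repository pins:
# Runloop SDK (assumed package name; check the example repository for the exact dependency)
runloop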

Deploying your first long-running Agent

Now that we have written our agent code and told Runloop where to find it in the repository, let’s deploy it onto Runloop’s serverless platform.

Install the Runloop GitHub Application

Install the Runloop GitHub Application and give it access to the repository containing your agent code: https://github.com/apps/runloopai/installations/new
After installing the app, you should be redirected to the Runloop dashboard. Go to the deployments page to confirm your agent is deploying successfully. If there are any errors, click the details button for that deployment to see the logs of the failed deployment and fix any issues.

Invoking your agent

Creating an API key

Now that your agent is deployed, you can create an API key to invoke it. Go to the API keys page and create a new API key. Then set the RUNLOOP_API_KEY environment variable to your new API key.
export RUNLOOP_API_KEY=<YOUR_API_KEY>

Invoking your agent

With your agent deployed, you can now invoke it via HTTP request. The agent is addressed by the name of the repository and the name of the agent function.
curl -X POST https://api.runloop.ai/v1/functions/example_python_agent/example_agent/invoke_sync -H "Authorization: Bearer $RUNLOOP_API_KEY" -H "Content-Type: application/json" --data '{"request": {"name": "Bob"}}'
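If you prefer to invoke the endpoint from Python rather than curl, a minimal sketch using the requests library (assuming the same endpoint and request payload as the curl command above) might look like this:
import os
import requests

# Read the API key created earlier from the environment.
api_key = os.environ["RUNLOOP_API_KEY"]

# The agent is addressed by repository name and function name.
url = "https://api.runloop.ai/v1/functions/example_python_agent/example_agent/invoke_sync"

response = requests.post(
    url,
    headers={
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    },
    json={"request": {"name": "Bob"}},
)
print(response.json())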

Long-Running Lambda Invocations

Querying the status and results of agent runs

If you expect your agent to take a long time to run, you can instead invoke your agent functions asynchronously and query the current state of the agent via the Runloop API.
curl -X POST https://api.runloop.ai/v1/functions/example_python_agent/example_agent/invoke_async -H "Authorization: Bearer $RUNLOOP_API_KEY" -H "Content-Type: application/json" --data '{"request": {"name": "Bob"}}'
Now you can query the current state of that invocation using the invocation ID returned from the previous call:
curl -G https://api.runloop.ai/v1/functions/invocations/<YOUR_INVOCATION_ID> -H "Authorization: Bearer $RUNLOOP_API_KEY"
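Tying the two calls together, here is a rough sketch of an asynchronous invoke-and-poll loop in Python. The response field names ("id", "status") and the non-terminal status values used below are assumptions for illustration; check the actual API responses for the exact shape.
import os
import time
import requests

api_key = os.environ["RUNLOOP_API_KEY"]
headers = {"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"}
base = "https://api.runloop.ai/v1/functions"

# Kick off an asynchronous invocation of the agent.
invoke = requests.post(
    f"{base}/example_python_agent/example_agent/invoke_async",
    headers=headers,
    json={"request": {"name": "Bob"}},
).json()

# NOTE: "id" is an assumed field holding the invocation ID returned by invoke_async.
invocation_id = invoke["id"]

# Poll the invocation until it leaves an assumed non-terminal state.
while True:
    invocation = requests.get(f"{base}/invocations/{invocation_id}", headers=headers).json()
    if invocation.get("status") not in ("queued", "running"):
        break
    time.sleep(5)

print(invocation)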

Debugging your agent

Every agent run is automatically logged to the Runloop dashboard. The request and response of the run are recorded along with any agent logs, and you can inspect them directly on the invocation’s page in the dashboard.