thearchitector/ociapp

OCIApp

OCIApp is a framework for building and running dependency-sandboxed Python applications using OCI artifacts.

API Usage

OCIApp has four phases:

  1. Define an Application with a custom task, handler, etc.
  2. Build it as an .ociapp artifact with ociapp-build.
  3. Distribute the .ociapp to an external party (or yourself).
  4. Execute it with Runtime.execute.

Define an application

from ociapp import Application
from pydantic import BaseModel


class EchoRequest(BaseModel):
    value: str


class EchoResponse(BaseModel):
    value: str


class EchoApplication(Application[EchoRequest, EchoResponse]):
    async def execute(self, request: EchoRequest) -> EchoResponse:
        return EchoResponse(value=request.value)


app = EchoApplication()

Build an artifact

Declare the build entrypoint in pyproject.toml:

[tool.ociapp-build]
mode = "managed"
entrypoint = "echo_app.main:app"
system-packages = ["vim"]

# [advanced] or if you want to use a custom Containerfile (Dockerfile)

[tool.ociapp-build]
mode = "custom"
containerfile = "Containerfile.custom"

Note: With a "custom" build, you're responsible for ensuring the container has proper user permissions, includes your application, and runs ociapp serve --app on boot.
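
As a hedged sketch, a custom Containerfile meeting those requirements might look like the following; the base image, user name, paths, and the argument passed to --app are illustrative assumptions, not ociapp requirements:

```dockerfile
# Illustrative custom Containerfile sketch. Base image, user name, and
# paths are assumptions; adapt them to your application.
FROM python:3.12-slim

# proper user permissions: run as an unprivileged user
RUN useradd --create-home appuser
WORKDIR /home/appuser/app

# include your application and its dependencies
COPY . .
RUN pip install --no-cache-dir .

USER appuser
# run the app server on boot
ENTRYPOINT ["ociapp", "serve", "--app", "echo_app.main:app"]
```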

Then package it into a distributable .ociapp archive:

ociapp-build . --output-dir dist

Execute an artifact

from pathlib import Path

from ociapp_runtime import Runtime, DockerAdapter

async with (
    Runtime(
        engine=DockerAdapter(),  # engine implementation (default DockerAdapter)
        startup_timeout=10,  # max time to wait for a container to start before failing (default 10s)
        request_timeout=30,  # max time for an execution request to complete (default 30s)
        shutdown_timeout=10,  # max time to wait for a container to gracefully stop before killing it (default 10s)
        idle_timeout=900,  # duration to keep an idle container running before stopping it (default 900s)
        reaper_interval=1,  # how frequently to check for and reap idle containers (default 1s)
    ) as runtime
):
    response = await runtime.execute(
        Path("dist/echo-app-0.1.0.ociapp"), {"value": "hello"}
    )
    print(response)
    # ==> {"value": "hello"}

See example/echo-app and example/runtime_demo.py for more detail.

How it works

OCIApp comprises three packages. The uv workspace uses ociapp-runtime as the root package, while ociapp and ociapp-build remain under packages/.

  • ociapp: An SDK for defining arbitrary Python applications
  • ociapp-build: A standalone CLI that builds .ociapp OCI archives via Docker Buildx
  • ociapp-runtime: A library for implementing OCIApp Runtimes, which handle spinning up and managing a pool of application containers to fulfill execution requests.

ociapp exposes an Application class, which expects an implementation of a single execute method. It also includes the ociapp serve command, which ociapp-build uses as the OCI artifact's ENTRYPOINT; it spins up a UDS server that listens for, and responds to, execution requests by actually invoking .execute.
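
For intuition, here is a minimal sketch of that request/response loop over a Unix domain socket. It is an illustration only, not ociapp's actual wire protocol; the socket path and the newline-delimited JSON framing are assumptions.

```python
# Illustrative UDS request/response loop (not ociapp's real protocol).
import asyncio
import json
import os
import tempfile


async def handle(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
    # one newline-delimited JSON request per connection (framing is assumed)
    request = json.loads(await reader.readline())
    response = {"value": request["value"]}  # what EchoApplication.execute returns
    writer.write(json.dumps(response).encode() + b"\n")
    await writer.drain()
    writer.close()
    await writer.wait_closed()


async def main() -> dict:
    path = os.path.join(tempfile.mkdtemp(), "app.sock")
    server = await asyncio.start_unix_server(handle, path=path)
    async with server:
        # act as the runtime's UDS client for one round trip
        reader, writer = await asyncio.open_unix_connection(path)
        writer.write(json.dumps({"value": "hello"}).encode() + b"\n")
        await writer.drain()
        reply = json.loads(await reader.readline())
        writer.close()
        await writer.wait_closed()
    return reply


reply = asyncio.run(main())
print(reply)  # {'value': 'hello'}
```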

ociapp-runtime primarily implements the Runtime, which:

  • exposes a simple .execute interface that runs a provided OCI artifact's application with a given request payload.
  • maintains a warm pool of application containers, spinning up new ones as needed and tearing down idle ones.
  • maintains a UDS client for every active application container.
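
The idle-reaping bookkeeping behind the warm pool can be pictured with a small sketch. Container handles are stubbed as strings, and the class below is an illustration of the timeout logic, not the real Runtime's implementation:

```python
# Hedged sketch of warm-pool idle reaping; not the real Runtime internals.
import time


class WarmPool:
    def __init__(self, idle_timeout: float):
        self.idle_timeout = idle_timeout
        self._last_used: dict[str, float] = {}

    def touch(self, container: str) -> None:
        # record activity; also registers containers on first use
        self._last_used[container] = time.monotonic()

    def reap(self) -> list[str]:
        # forget containers idle longer than idle_timeout
        # (a real engine would also stop them)
        now = time.monotonic()
        idle = [c for c, t in self._last_used.items() if now - t > self.idle_timeout]
        for c in idle:
            del self._last_used[c]
        return idle


pool = WarmPool(idle_timeout=0.05)
pool.touch("echo-app")
time.sleep(0.1)  # exceed the idle timeout
reaped = pool.reap()
print(reaped)  # ['echo-app']
```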

Request Flow

sequenceDiagram
    autonumber
    actor Host
    participant RT as Runtime
    participant C as Container
    participant S as App Server
    participant App as Application

    Host->>RT: execute request

    opt no warm container
        RT->>C: start container
        C->>S: boot app server
        RT->>C: wait for UDS
        RT->>C: open session
    end

    RT->>C: send request over UDS
    C->>S: receive request
    S->>App: validate + execute
    App-->>S: return response
    S-->>C: send response over UDS
    C-->>RT: receive response
    RT-->>Host: return result

    opt idle timeout or shutdown
        RT->>C: stop container
    end

TODO

  1. Decouple Runtime ownership from execute so that the runtime can exist on a separate host
    • a LocalExecutor for when the runtime exists on the same machine
    • a RemoteExecutor for when the runtime is decoupled
  2. Refactor Runtime and Engine to support:
    • local on-machine Docker via a LocalEngine
    • a K8Engine as a node-local resource that spins up K8 containers
