From Prototype to Production in Minutes

The Swiss-Army Knife for ML Workflows: develop, deploy, and scale ML systems on AWS with simple YAML. From prototype to production without infrastructure complexity.

15 AWS Services
4 Languages
1 YAML File
Enterprise Ready
AWS Native
Team Collaboration
Supports:
SageMaker, Bedrock, Glue, Lambda, OpenSearch, +10 more

Choose Your Path

Find the right starting point for your ML journey

New to ModelKnife?

Learn why teams choose ModelKnife

Learn More

Ready to Deploy?

Get your first pipeline running in 5 minutes

Quick Start

Focus on ML, Not Infrastructure

Build production ML systems without becoming an AWS expert. ModelKnife handles infrastructure complexity so you can focus on what matters: your models and data.

Deploy in Minutes

Single YAML file with intelligent defaults handles complex AWS infrastructure

Security by Default

Automatic IAM roles, VPC setup, and encryption without configuration

Scale Without Complexity

From prototype to production with the same simple configuration

Traditional AWS Setup: days to weeks

  • Research AWS service documentation
  • Configure 10+ services with dependencies
  • Write IAM policies and roles
  • Debug networking and permissions
  • Build custom orchestration code

With ModelKnife: 5 minutes

  • Single YAML configuration
  • Automatic security setup
  • Built-in orchestration
  • Deploy with one command

Why Choose ModelKnife?

Built for ML teams who need to deploy and scale production workflows across multiple AWS services with confidence. One tool with everything you need for multi‑service ML orchestration.

Multi-Service Orchestration

Seamlessly coordinate AWS Glue, SageMaker, DynamoDB, Lambda, and Step Functions in unified workflows.

Multi-Language Support

Use Python, Go, Node.js, Java, and Scala together in the same pipeline with automatic build system management.

Convention over Configuration

Auto-generates IAM roles, S3 buckets, and security groups with minimal specification required.

Dual-Mode Architecture

Separate lifecycle management for stable infrastructure and rapidly iterating ML pipelines.

Smart Scheduling

Timezone-aware cron scheduling with automatic DST handling and EventBridge integration.
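As a sketch of how that schedule concept could appear in a pipeline definition (the `schedule`, `cron`, and `timezone` keys here are illustrative assumptions, not the documented ModelKnife schema):

```yaml
# Hypothetical schedule block; key names are illustrative only.
pipelines:
  nightly-training:
    schedule:
      cron: "0 2 * * *"          # 02:00 every day, in local time
      timezone: "Europe/Berlin"  # 01:00 UTC in winter, 00:00 UTC in summer
```

Classic EventBridge rules evaluate cron expressions in UTC only, so a fixed 02:00 local schedule would otherwise drift by an hour at every DST change; absorbing that shift is what timezone-aware scheduling means here.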

Team Access Control

Role-based permissions with AWS IAM groups for developers and administrators.

Dual-Mode Architecture

ModelKnife separates stable infrastructure from rapidly iterating ML workflows, reflecting how ML teams actually work.

mk s Services

Infrastructure

Infrastructure services for model serving and real-time inference

  • DynamoDB Tables
  • Lambda Functions
  • API Gateway
  • OpenSearch Domains
Deploy Frequency: Weekly/Monthly

mk p Pipelines

ML Workflows

Complete ML lifecycle workflows from raw data to production models

  • SageMaker Jobs
  • Glue ETL
  • Step Functions
  • Spark Processing
Deploy Frequency: Daily/Hourly
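One way the two modes could sit side by side in a single compose file (a hypothetical sketch; the section and `type` names are illustrative assumptions, not the documented ModelKnife schema):

```yaml
# Hypothetical mlknife-compose.yaml layout; key names are illustrative only.
services:                      # stable infrastructure, managed via `mk s`
  feature-store:
    type: dynamodb-table
pipelines:                     # fast-iterating ML workflows, managed via `mk p`
  train-recommender:
    steps:
      - type: glue-etl
      - type: sagemaker-training
```

The point of the split is that redeploying a pipeline daily never has to touch, or risk, the weekly/monthly infrastructure underneath it.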

Get Started in 5 Minutes

From installation to your first ML pipeline deployment

  1. Install ModelKnife

     pip install git+ssh://git@github.com/naoo-AI/modelknife.git

  2. Initialize Team Setup

     mk setup init

     Creates IAM groups, roles, and AWS resources

  3. Create Configuration

     touch mlknife-compose.yaml

     Create your pipeline configuration file

  4. Deploy & Run

     mk s deploy && mk p deploy && mk p run

     Deploys infrastructure and the pipeline, then runs the workflow
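The configuration file created in step 3 starts out empty; a minimal fill-in might look like the sketch below (the `services`, `type`, and `table` keys are illustrative assumptions, not the documented ModelKnife schema):

```yaml
# Hypothetical minimal configuration; key names are illustrative only.
services:
  user-events:
    type: dynamodb-table       # table, IAM role, and encryption auto-generated
    table:
      partition_key: user_id
```

With a file like this in place, the single `mk s deploy` in step 4 would be enough to create the table along with its security defaults.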

Real-World Examples

Working examples for common ML use cases

ML Recommendations

Complete Pipeline

Multi-service ML pipeline with Glue ETL, SageMaker training, and Spark processing for product recommendations.

Glue SageMaker Spark
View Example

Semantic Search Engine

Infrastructure

Complete search infrastructure with OpenSearch, Lambda APIs, and multilingual embedding support.

OpenSearch Lambda Bedrock
View Example

Basic DynamoDB Setup

Infrastructure

Simple service deployment example showing DynamoDB table creation with automatic security configuration.

DynamoDB IAM
View Example

Documentation

Complete guides and API reference for ModelKnife