The Swiss‑Army Knife for ML Workflows: develop, deploy, and scale ML systems on AWS with simple YAML. From prototype to production without infrastructure complexity.
name: ml-pipeline
author: ml-team

executors:
  python_processor:
    type: sagemaker_processor

services:
  api:
    type: lambda_function
    repository: ../services/

modules:
  data_prep:
    repository: ../modules
    executor: ${executors.glue_etl}
  training:
    repository: ../modules
    executor: ${executors.python_processor}
    depends_on: [data_prep]
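Each module's repository holds the code its executor runs. For a sagemaker_processor executor, that is typically a plain script following the SageMaker Processing container convention of reading inputs from /opt/ml/processing/input and writing outputs to /opt/ml/processing/output. A minimal sketch of what a data_prep step might contain; the CSV filtering logic and filenames here are illustrative, not part of ModelKnife:

```python
import csv
from pathlib import Path

# SageMaker Processing mounts input/output channels under these paths by convention.
INPUT_DIR = Path("/opt/ml/processing/input")
OUTPUT_DIR = Path("/opt/ml/processing/output")


def prepare(input_dir: Path = INPUT_DIR, output_dir: Path = OUTPUT_DIR) -> int:
    """Copy every CSV, dropping rows with any empty field; return rows kept."""
    output_dir.mkdir(parents=True, exist_ok=True)
    kept = 0
    for src in input_dir.glob("*.csv"):
        with src.open() as fin, (output_dir / src.name).open("w", newline="") as fout:
            writer = csv.writer(fout)
            for row in csv.reader(fin):
                if all(field.strip() for field in row):  # keep complete rows only
                    writer.writerow(row)
                    kept += 1
    return kept
```

ModelKnife's job is then limited to packaging this repository, wiring it to the executor, and running it in order after its dependencies.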
Find the right starting point for your ML journey:
Learn why teams choose ModelKnife
Get your first pipeline running in 5 minutes
Explore real-world ML workflows

Build production ML systems without becoming an AWS expert. ModelKnife handles infrastructure complexity so you can focus on what matters: your models and data.
Single YAML file with intelligent defaults handles complex AWS infrastructure
Automatic IAM roles, VPC setup, and encryption without configuration
From prototype to production with the same simple configuration
Built for ML teams who need to deploy and scale production workflows across multiple AWS services with confidence. One tool with everything you need for multi‑service ML orchestration.
Seamlessly coordinate AWS Glue, SageMaker, DynamoDB, Lambda, and Step Functions in unified workflows.
Use Python, Go, Node.js, Java, and Scala together in the same pipeline with automatic build system management.
Auto-generates IAM roles, S3 buckets, and security groups with minimal specification required.
Separate lifecycle management for stable infrastructure and rapidly iterating ML pipelines.
Timezone-aware cron scheduling with automatic DST handling and EventBridge integration.
Role-based permissions with AWS IAM groups for developers and administrators.
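The scheduling point above is easy to see with Python's standard zoneinfo module: a job pinned to a wall-clock time fires at a different UTC hour before and after a DST transition, which is exactly what a plain UTC cron expression cannot express. This snippet illustrates the problem, not ModelKnife's internals:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

NY = ZoneInfo("America/New_York")


def utc_fire_time(year: int, month: int, day: int, hour: int = 9) -> str:
    """UTC instant at which a job scheduled for a New York wall-clock hour fires."""
    local = datetime(year, month, day, hour, tzinfo=NY)
    return local.astimezone(timezone.utc).strftime("%H:%M")


# The same 09:00 New York schedule lands on different UTC hours across DST:
print(utc_fire_time(2024, 1, 15))  # 14:00 (EST, UTC-5)
print(utc_fire_time(2024, 7, 15))  # 13:00 (EDT, UTC-4)
```

A timezone-aware scheduler adjusts the underlying UTC trigger across these transitions automatically, so the job keeps firing at the same local time.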
ModelKnife separates stable infrastructure from rapidly iterating ML workflows, reflecting how ML teams actually work.
mk s manages Services: infrastructure services for model serving and real-time inference.
mk p manages Pipelines: complete ML lifecycle workflows from raw data to production models.
From installation to your first ML pipeline deployment
pip install git+ssh://git@github.com/naoo-AI/modelknife.git
mk setup init
Creates IAM groups, roles, and AWS resources
touch mlknife-compose.yaml
Create your pipeline configuration file
mk s deploy && mk p deploy && mk p run
Deploy infrastructure, pipeline, and execute workflow
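The mlknife-compose.yaml you create in the third step doesn't have to start empty. A minimal starting point, trimmed down from the example configuration earlier on this page (the python_processor executor and training module are the ones shown there):

```yaml
name: ml-pipeline
author: ml-team

executors:
  python_processor:
    type: sagemaker_processor

modules:
  training:
    repository: ../modules
    executor: ${executors.python_processor}
```

From there, grow the file incrementally: add services, more modules, and depends_on edges as the workflow takes shape.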
Working examples for common ML use cases
Multi-service ML pipeline with Glue ETL, SageMaker training, and Spark processing for product recommendations.
Complete search infrastructure with OpenSearch, Lambda APIs, and multilingual embedding support.
Simple service deployment example showing DynamoDB table creation with automatic security configuration.
Complete guides and API reference for ModelKnife