OpenClaw Skill
ml-experiment-tracker
Plan reproducible ML experiment runs with explicit parameters, metrics, and artifacts. Use before model training to standardize tracking-ready experiment definitions.
Install
$ npx clawhub@latest install ml-experiment-tracker
View on GitHub · v0.1.0
ML Experiment Tracker
Overview
Generate structured experiment plans that can be logged consistently in experiment tracking systems.
Workflow
- Define dataset, target task, model family, and parameter search space.
- Define metrics and acceptance thresholds before training.
- Produce a run plan with version and artifact expectations.
- Export the run plan for execution in tracking tools.
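The workflow above can be sketched as a small plan builder. This is a minimal illustration, not the bundled `scripts/build_experiment_plan.py`; all field names and example values (dataset name, metric spec layout) are assumptions chosen for clarity.

```python
import json

def build_experiment_plan(dataset, task, model_family, search_space, metrics, artifacts):
    """Assemble a machine-readable run plan (illustrative sketch only)."""
    return {
        "dataset": dataset,
        "task": task,
        "model_family": model_family,
        "search_space": search_space,   # explicit parameter grid or ranges
        "metrics": metrics,             # each metric carries an acceptance threshold
        "artifacts": artifacts,         # expected outputs per run
        "plan_version": "0.1.0",        # hypothetical versioning field
    }

plan = build_experiment_plan(
    dataset="sales-2024-q1.csv",
    task="binary-classification",
    model_family="gradient-boosting",
    search_space={"learning_rate": [0.01, 0.1], "max_depth": [3, 6]},
    metrics={"roc_auc": {"threshold": 0.85, "baseline": 0.80}},
    artifacts=["model.pkl", "metrics.json"],
)
print(json.dumps(plan, indent=2))  # export for a tracking tool
```

Because the plan is a plain dict serialized to JSON, it can be exported to any tracking system that accepts structured run metadata.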
Use Bundled Resources
- Run scripts/build_experiment_plan.py to generate consistent run plans.
- Read references/tracking-guide.md for the reproducibility checklist.
Guardrails
- Keep inputs explicit and machine-readable.
- Always include metrics and baseline criteria.
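The guardrails above could be enforced with a simple validation pass before a plan is exported. This is a hypothetical check, not part of the skill's bundled code; the `metrics` layout it inspects is the same illustrative shape assumed above.

```python
def validate_plan(plan: dict) -> list:
    """Flag guardrail violations: missing metrics or baseline criteria (illustrative)."""
    problems = []
    if not plan.get("metrics"):
        problems.append("plan defines no metrics")
    for name, spec in plan.get("metrics", {}).items():
        if "threshold" not in spec:
            problems.append("metric '%s' lacks an acceptance threshold" % name)
        if "baseline" not in spec:
            problems.append("metric '%s' lacks a baseline criterion" % name)
    return problems

# A plan with a threshold but no baseline fails the second guardrail.
issues = validate_plan({"metrics": {"accuracy": {"threshold": 0.9}}})
print(issues)
```

Running the check before export keeps incomplete plans out of the tracking system instead of surfacing the gap after training.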
Created by
@0x-professor