OpenClaw Skill

ml-experiment-tracker

Plan reproducible ML experiment runs with explicit parameters, metrics, and artifacts. Use before model training to standardize tracking-ready experiment definitions.

Install

$ npx clawhub@latest install ml-experiment-tracker
All-time installs: 4
Active installs: 4
Stars: 0

ML Experiment Tracker

Overview

Generate structured experiment plans that can be logged consistently in experiment tracking systems.

Workflow

  1. Define dataset, target task, model family, and parameter search space.
  2. Define metrics and acceptance thresholds before training.
  3. Produce run plan with version and artifact expectations.
  4. Export the run plan for execution in tracking tools.
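The workflow above can be sketched as a small Python helper that assembles a machine-readable run plan and gives it a content-derived run ID. This is an illustrative sketch only: the field names, the `build_run_plan` helper, and the example dataset are assumptions, not the skill's actual schema.

```python
import hashlib
import json


def build_run_plan(dataset, task, model_family, search_space, metrics, thresholds):
    """Assemble a machine-readable run plan (illustrative schema, not the skill's)."""
    plan = {
        "dataset": dataset,                # explicit name + version
        "task": task,
        "model_family": model_family,
        "search_space": search_space,      # explicit parameter ranges, not prose
        "metrics": metrics,                # defined before training
        "acceptance_thresholds": thresholds,
        "artifacts": ["model.pkl", "metrics.json"],  # expected outputs per run
    }
    # Derive a stable run ID from the plan contents, so identical plans
    # map to the same ID and reruns are easy to spot.
    plan["run_id"] = hashlib.sha256(
        json.dumps(plan, sort_keys=True).encode()
    ).hexdigest()[:12]
    return plan


plan = build_run_plan(
    dataset={"name": "imdb-reviews", "version": "v2"},
    task="binary-classification",
    model_family="logistic-regression",
    search_space={"C": [0.1, 1.0, 10.0]},
    metrics=["f1", "accuracy"],
    thresholds={"f1": 0.80},
)
print(json.dumps(plan, indent=2))  # export for execution in tracking tools
```

Serializing the plan with `sort_keys=True` before hashing keeps the run ID independent of dictionary insertion order, which matters if plans are assembled from multiple sources.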

Use Bundled Resources

  • Run scripts/build_experiment_plan.py to generate consistent run plans.
  • Read references/tracking-guide.md for reproducibility checklist.

Guardrails

  • Keep inputs explicit and machine-readable.
  • Always include metrics and baseline criteria.
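These guardrails can be enforced with a small validation pass before a plan is accepted. The check below is a sketch against a hypothetical plan dict; the field names (`metrics`, `acceptance_thresholds`, `search_space`) are assumptions, not a documented schema.

```python
def validate_plan(plan):
    """Reject plans that omit metrics or baseline criteria (illustrative check)."""
    errors = []
    if not plan.get("metrics"):
        errors.append("plan must define at least one metric")
    if not plan.get("acceptance_thresholds"):
        errors.append("plan must define baseline/acceptance criteria")
    # Machine-readable means no free-text parameter descriptions.
    for key, value in plan.get("search_space", {}).items():
        if isinstance(value, str):
            errors.append(f"search_space[{key!r}] should be a list or range, not prose")
    return errors


# A plan with prose in its search space and no thresholds fails validation:
bad_plan = {"metrics": ["f1"], "search_space": {"C": "try a few values"}}
print(validate_plan(bad_plan))
```

Returning a list of errors rather than raising on the first one lets the agent report every problem with a plan in a single pass.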
