LLM Evaluation

Evaluated by: xiaomi/mimo-v2-flash:free

Last evaluated: March 29, 2026

Prompt Quality

3.0 /5

Evaluation error: RetryError[]

Usefulness

3.0 /5

Evaluation error: RetryError[]

Overall Rating

3.0 /5

Evaluation failed

Prompt Preview

---
name: track-ml-experiments
description: >
  Set up MLflow tracking server for experiment management, configure autologging
  for popular ML frameworks, compare runs with metrics and visualizations, and
  manage artifacts in remote storage backends for reproducible machine learning workflows.
  Use when starting a new ML project that requires experiment tracking, migrating from
  manual logs to automated tracking, comparing multiple training runs systematically, or
  building reproducible ML...

Full prompt length: 9908 characters
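
The setup the prompt describes (a tracking server with a database backend store and an artifact root) can be sketched with the MLflow CLI. This is a minimal local sketch, assuming MLflow is installed; the SQLite file and `./mlruns` directory are illustrative placeholders, and a production setup would point `--backend-store-uri` at PostgreSQL/MySQL and `--default-artifact-root` at a remote store such as S3 or Azure Blob Storage.

```shell
# Minimal local sketch: start an MLflow tracking server backed by SQLite,
# storing run artifacts under ./mlruns (paths are assumptions, not from the prompt).
mlflow server \
  --backend-store-uri sqlite:///mlflow.db \
  --default-artifact-root ./mlruns \
  --host 0.0.0.0 \
  --port 5000
```

Training scripts then point at this server by setting `MLFLOW_TRACKING_URI=http://localhost:5000` (or calling `mlflow.set_tracking_uri` in Python) before enabling autologging.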

Tools & Technologies

  • SQLite
  • Python
  • Docker
  • PostgreSQL
  • MySQL
  • Azure
  • AWS