Run and monitor ATFT-GAT-FAN training loops, hyper-parameter sweeps, and safety modes on A100 GPUs.
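The dataset-recency and GPU-headroom preflight checks described in this runbook can be scripted in one place. A minimal sketch follows; the thresholds (72 h recency, <76 GB baseline memory) mirror the runbook, but the script itself is illustrative — `scripts/utils/dataset_guard.py` remains the authoritative check.

```python
"""Preflight sketch: verify dataset freshness and GPU headroom before a long run.

Thresholds mirror the runbook; this is an illustrative helper, not a repo script.
"""
import subprocess
import time
from pathlib import Path

DATASET = Path("output/ml_dataset_latest_full.parquet")
MAX_AGE_HOURS = 72                 # matches dataset_guard --assert-recency 72
MAX_BASELINE_MEM_MIB = 76 * 1024   # "<76 GB used baseline"


def check_dataset() -> bool:
    """Fail if the dataset is missing or older than the recency window."""
    if not DATASET.exists():
        print(f"FAIL: {DATASET} is missing")
        return False
    age_h = (time.time() - DATASET.stat().st_mtime) / 3600
    print(f"dataset age: {age_h:.1f}h (limit {MAX_AGE_HOURS}h)")
    return age_h <= MAX_AGE_HOURS


def check_gpu() -> bool:
    """Warn and pass when nvidia-smi is unavailable; fail on low memory headroom."""
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=utilization.gpu,memory.used",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True,
        ).stdout
    except (FileNotFoundError, subprocess.CalledProcessError):
        print("WARN: nvidia-smi unavailable; skipping GPU check")
        return True
    ok = True
    for line in out.strip().splitlines():
        util, mem = (int(x.strip()) for x in line.split(","))
        print(f"GPU util={util}% mem={mem} MiB")
        ok = ok and mem < MAX_BASELINE_MEM_MIB
    return ok
```

Run it before `make train-optimized`; a nonzero dataset age past the window or a near-full GPU should abort the launch.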
## Preflight

- Verify the dataset: `ls -lh output/ml_dataset_latest_full.parquet`, then `python scripts/utils/dataset_guard.py --assert-recency 72`.
- `tools/project-health-check.sh --section training`.
- Check GPU headroom: `nvidia-smi --query-gpu=utilization.gpu,memory.used --format=csv` (target >60% utilization, <76 GB used baseline).
- `git status --short` — ensure the working-tree state is understood (avoid accidental overrides during long runs).

## Standard training run

- `make train-optimized DATASET=output/ml_dataset_latest_full.parquet` — compiles TorchInductor + FlashAttention2.
- `make train-monitor` — tails `_logs/training/train-optimized.log`.
- `make train-status` — polls the background process; ensure ETA < 7 h.

## Evaluation

- `python scripts/eval/aggregate_metrics.py runs/latest` — computes Sharpe, RankIC, and hit ratios.
- Record the summary in `results/latest_training_summary.md`.

## Quick validation

- `make train-quick EPOCHS=3` — runs in the foreground.
- `python scripts/smoke_test.py --max-epochs 1 --subset 512` — additional regression guard.
- `pytest tests/integration/test_training_loop.py::test_forward_backward` — if gradients look suspicious.

## Safe mode and recovery

- `make train-safe` — disables compile, single-worker dataloading.
- `make train-stop` — if hung jobs are detected (consult `_logs/training/pids/`).

## Profiling

- `python scripts/integrated_ml_training_pipeline.py --profile --epochs 2 --no-compile` — captures a flamegraph to `benchmark_output/`.

## Hyper-parameter sweeps

- Ensure the MLflow backend is running if required (`make mlflow-up`).
- `make hpo-run HPO_TRIALS=24 HPO_STUDY=atft_prod_lr_sched` — uses the Optuna integration.
- `make hpo-status` — track trial completions.
- Update `configs/training/atft_prod.yaml` and document the results in `EXPERIMENT_STATUS.md`.

## Monitoring and artifacts

- Logs: `_logs/training/*.log` (includes gradient norms, learning-rate schedule, GPU temperature).
- Metrics: `runs/<timestamp>/metrics.jsonl`.
- Checkpoints: `models/checkpoints/<timestamp>/epoch_###.pt`.
- Live GPU view: `watch -n 30 nvidia-smi` or `python tools/gpu_monitor.py --pid $(cat _logs/training/pids/train.pid)`.

## Troubleshooting

- NaN losses: `make train-safe` with `FP32=1`; inspect `runs/<ts>/nan_batches.json`.
- Slow throughput: `make dataset-gpu GRAPH_WINDOW=90`, or enable PyTorch compile caching.
- Out of memory: set `GRADIENT_ACCUMULATION_STEPS=2` or reduce `BATCH_SIZE`; confirm memory fragmentation via `python tools/gpu_memory_report.py`.
- Loss-schedule issues: `configs/training/schedule.yaml`; run `pytest tests/unit/test_loss_functions.py`.

## AI assistance

- `./tools/codex.sh --max "Design a new learning rate policy for ATFT-GAT-FAN"` — when a novel optimizer or architecture strategy is required.
- `codex exec --model gpt-5-codex "Analyze runs/<timestamp>/metrics.jsonl and suggest fixes"` — for automated postmortems.

## Record keeping

- Keep results under `results/training_runs/` and update config files/documents accordingly.
- Write `results/training_runs/<timestamp>.md`, noting the dataset hash and commit SHA.
- Store models in `models/artifacts/` with the naming `gatfan_<date>_Sharpe<score>.pt`.
- Update `docs/research/changelog.md`.
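Before reaching for an automated postmortem, a `runs/<timestamp>/metrics.jsonl` file can be summarized with a few lines of Python. The sketch below assumes one JSON object per line with flat numeric fields; the field names shown in the usage are hypothetical — match them to whatever `scripts/eval/aggregate_metrics.py` actually emits.

```python
"""Minimal postmortem sketch: summarize numeric fields in a metrics.jsonl.

Field names are assumptions; align them with the repo's actual metrics schema.
"""
import json
from pathlib import Path


def summarize(path: str) -> dict:
    """Return min/max/last for every numeric field across all JSONL records."""
    records = [
        json.loads(line)
        for line in Path(path).read_text().splitlines()
        if line.strip()
    ]
    if not records:
        return {}
    summary = {}
    for key in records[0]:
        # Collect only numeric values so string fields (e.g. run ids) are skipped.
        values = [r[key] for r in records if isinstance(r.get(key), (int, float))]
        if values:
            summary[key] = {"min": min(values), "max": max(values), "last": values[-1]}
    return summary
```

Typical use: `summarize("runs/20250101_120000/metrics.jsonl")`, then eyeball whether the last-epoch loss is near its minimum and whether Sharpe/RankIC plateaued or regressed.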