Autonomy · Benchmarks · April 2026

How Long Can Claude Mythos Work Alone?

Opus 4.6 can complete human tasks that take 12 hours. We estimate Mythos could handle tasks that take 16.

Sam Donahue · April 7, 2026 · Written before METR evaluation of Mythos

METR measures the longest human task an AI model can autonomously complete — writing code, debugging systems, running experiments. It is a measure of task complexity, not runtime: a 12-hour time horizon means the model can solve problems that would take a human roughly 12 hours, not that the model runs for 12 hours. The current leader is Opus 4.6 at ~12 hours. Mythos has no METR score yet.

We regress aggregate capability scores (IRT) against METR time horizons for 19 models. On reasoning-era models (n=14, R²=0.977), the fit predicts Mythos could complete human tasks of roughly 16 hours — a 33% increase over Opus 4.6.

[Figure: Reasoning-era models: Mythos predicted at ~16-hour task complexity. Post-o1 models only (n=14, R² = 0.977); linear and quadratic converge. Faded points = pre-o1 models (not in fit); error bars = METR CIs. Data: METR-Horizon-v1.1 × self-reported IRT (Ho et al.); band = 90% bootstrap CI (400 resamples); y-axis = human-equivalent task duration. Callouts: ~16h predicted task complexity (median, post-o1 fit); 12h current METR leader (Opus 4.6); +33% predicted increase over Opus 4.6; R² = 0.977 (post-o1 fit); LOO error 9–11%.]

Estimates at a glance

| Method | Estimate (human-task hours) | Source |
| --- | --- | --- |
| IRT regression, post-o1 (n=14) | ~16h (linear & quadratic converge) | This analysis |
| IRT regression, all models (n=19) | 15.9–27.1h | This analysis |
| Alternative fits (power law, sigmoid, piecewise) | 10–17h (4 of 6 fits) | This analysis |
| Individual benchmark ensemble (6 benchmarks) | 10.5h median (5.4–18.1h) | This analysis |
| Anthropic internal task evals | 40h-equiv. on 2/3 of tasks | System Card p. 34 |
| Anthropic qualitative assessment | "Not close" to replacing engineers | System Card p. 45 |

Our estimates cluster around 10–17 hours across methods. Anthropic's internal task evaluations (40h-equivalent) are not directly comparable — they measure speedup on narrow tasks, not sustained autonomous completion of complex work. Their qualitative finding ("not close" to engineer replacement) is consistent with a model that handles day-length tasks but cannot reliably sustain multi-day autonomous work.


Background: METR and IRT

METR (Model Evaluation & Threat Research) evaluates AI autonomy by giving models real-world tasks of increasing complexity — coding challenges, system debugging, research experiments. The time horizon is the human-equivalent duration of the most complex task the model can complete. METR reports two reliability thresholds, p50 (50% success rate) and p80 (80% success rate); the current p50 leaders:

| Model | p50 (task hours) | Release |
| --- | --- | --- |
| Claude Opus 4.6 | ~12 hours | Feb 2026 |
| GPT-5.2 | ~5.9 hours | Dec 2025 |
| GPT-5.3 Codex | ~5.8 hours | Feb 2026 |
| Claude Opus 4.5 | ~4.9 hours | Nov 2025 |
| Gemini 3 Pro | ~3.7 hours | Nov 2025 |

Source: Epoch AI / METR-Horizon-v1.1, retrieved March 21, 2026. Mythos has no METR score yet.

Item Response Theory (IRT) aggregates many benchmark scores into a single ability parameter per model, simultaneously estimating each benchmark's difficulty (Ho et al., as implemented by the Epoch Capabilities Index). We use self-reported IRT (from labs' own benchmark results) because Mythos's score of 186.6 is on that scale. Mixing self-reported with third-party IRT produces unstable extrapolations due to a systematic ~6–11 point gap between the scales.

We fit log(METR minutes) = a + b·IRT + c·IRT² to the 19 models with both METR and IRT scores, testing across three regime cutoffs and six functional forms.
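As a sketch, the fit amounts to two polyfit calls in log-space. The data below is an illustrative 8-model subset taken from the appendix table, not the full 19-model dataset, so the printed numbers will differ slightly from the headline figures:

```python
import numpy as np

# Illustrative subset of (IRT, METR p50 minutes) pairs from the appendix.
irt = np.array([130.0, 140.1, 150.0, 162.2, 166.8, 167.6, 169.7, 177.0])
p50_min = np.array([20.5, 60.4, 119.7, 203.0, 224.3, 293.0, 352.2, 718.8])

y = np.log(p50_min)                  # fit in log-space
lin = np.polyfit(irt, y, 1)          # log(minutes) = a + b*IRT
quad = np.polyfit(irt, y, 2)         # adds the c*IRT^2 curvature term

mythos_irt = 186.6
pred_lin_h = float(np.exp(np.polyval(lin, mythos_irt)) / 60)
pred_quad_h = float(np.exp(np.polyval(quad, mythos_irt)) / 60)
print(f"linear ~{pred_lin_h:.0f}h, quadratic ~{pred_quad_h:.0f}h")
```

On this subset both forms land in the high teens; the full-dataset fits in the tables below are the authoritative numbers.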

Prior work: Epoch AI's baseline

Ho et al. ("A Rosetta Stone for AI Benchmarks", Section 3.1.1) established the baseline for this approach. They fit a linear map from their estimated capability score (Cm) to log(time horizon), finding:

time_horizon = exp(3.69 × Cm − 4.58)

They report R² = 0.85 on the 40% held-out portion of a 60/40 train-test split. This outperformed individual benchmarks (median R² = 0.62 across 18 benchmarks; GPQA Diamond alone reached R² = 0.75). They note that some benchmarks, including SWE-Bench Verified, are "actively detrimental" for predicting time horizons (footnote 8, p. 5).
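Their map is easy to work with directly; a small sketch, using only the published coefficients (Cm values here are purely illustrative):

```python
import math

# Epoch's published map (Ho et al., Section 3.1.1). Cm is their capability
# scale, not the IRT scale used elsewhere in this post.
def horizon(cm):
    return math.exp(3.69 * cm - 4.58)

def cm_for(t):
    # invert the map: Cm = (ln(t) + 4.58) / 3.69
    return (math.log(t) + 4.58) / 3.69

cm12 = cm_for(12.0)       # capability needed for a 12-hour horizon
print(round(cm12, 2))     # ≈ 1.91
```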

Our approach differs mainly in data recency, regime restriction, and validation protocol:

| Method | R² | Validation | n | Mythos |
| --- | --- | --- | --- | --- |
| Epoch (Ho et al.) — linear, Cm scale | 0.85 | 40% holdout (1 trial) | — | ~15–20 |
| Ours — linear, all models | 0.946 | LOO: 22% | 19 | 15.9h |
| Ours — linear, all models | 0.909* | 40% holdout (1000 trials) | 19 | 15.4h [12.4–23.4] |
| Ours — linear, post-o1 | 0.977 | LOO: 9% | 14 | 16.4h |
| Ours — linear, post-o1 | 0.937* | 40% holdout (1000 trials) | 14 | 17.8h [14.5–21.5] |

*Median test-set R² over 1000 random 60/40 splits. Brackets = 90% CI on the Mythos prediction across splits. The higher R² vs Epoch likely reflects updated data (more frontier models with METR scores) and regime restriction, not a methodological improvement. Epoch's approach is the foundational method; we apply it with newer data and test robustness. See also an EA Forum post applying weighted regression to 8 post-o3 models.

Results

[Figure: Full dataset (n=19): linear and quadratic diverge on older models. Including pre-o1 models (GPT-4, Claude 3 Opus, etc.) widens the quadratic prediction. Compare to the post-o1 chart above.]

Three regimes, two fits each.

| Regime | Fit | n | R² | LOO Error | p50 Prediction | p80 Prediction |
| --- | --- | --- | --- | --- | --- | --- |
| All models | Linear | 19 | 0.946 | 22% | 15.9 hours | 2.8 hours |
| All models | Quadratic | 19 | 0.959 | 32% | 27.1 hours | 4.3 hours |
| Post-o1 | Linear | 14 | 0.977 | 9% | 16.4 hours | 2.7 hours |
| Post-o1 | Quadratic | 14 | 0.977 | 11% | 16.1 hours | 1.4 hours |
| Frontier IRT≥130 | Linear | 13 | 0.971 | 10% | 16.8 hours | 2.8 hours |
| Frontier IRT≥130 | Quadratic | 13 | 0.972 | 13% | 14.6 hours | 0.9 hours |

LOO = Leave-one-out cross-validation median absolute percentage error. Post-o1 = models released after o1 (Dec 2024). Frontier = IRT ≥ 130.

Post-o1 regime (n=14): Linear and quadratic converge at ~16h with 9–11% LOO error. On the full dataset (n=19), they diverge (15.9h vs 27.1h) because pre-reasoning-era models pull the quadratic's curvature upward. The post-o1 fit is the best constrained: the two functional forms agree, and its LOO error is the lowest of any regime.
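The LOO column can be reproduced with a short helper (an illustrative reimplementation of the metric on synthetic data, not the original analysis code):

```python
import numpy as np

# Leave-one-out median absolute percentage error for a polynomial fit
# in log-space: refit with one model held out, predict it, take the
# median relative error across all hold-outs.
def loo_mape(x, y_log, deg):
    errs = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i
        coef = np.polyfit(x[mask], y_log[mask], deg)   # refit without model i
        err = abs(np.exp(np.polyval(coef, x[i])) - np.exp(y_log[i]))
        errs.append(err / np.exp(y_log[i]))
    return float(np.median(errs))

x = np.linspace(130, 177, 10)
y_log = 0.07 * x - 5.0            # exactly log-linear synthetic data
err = loo_mape(x, y_log, 1)
print(err)                        # ~0: a linear fit recovers it perfectly
```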

Beyond linear and quadratic

We also tested power law, sigmoid, cubic, and piecewise linear fits on the full n=19 dataset. The predictions cluster into two groups:

| Fit Type | Params | R² | LOO | Mythos p50 |
| --- | --- | --- | --- | --- |
| Linear (log-space) | 2 | 0.946 | 22% | 15.9h |
| Power law | 2 | 0.906 | 17% | 10.1h |
| Sigmoid (log-space) | 3 | 0.970 | 26% | 11.8h |
| Piecewise linear (break=130) | 4 | 0.972 | 18% | 16.8h |
| Quadratic (log-space) | 3 | 0.959 | 32% | 27.1h |
| Cubic (log-space) | 4 | 0.978 | 23% | 7.7h |

Four of six fits predict 10–17 hours. The quadratic (27h) is pulled up by curvature from older models. The cubic (7.7h) overfits and curves back down. Sigmoid and piecewise fits, which allow the functional form to change, land at 12–17 hours.
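The power-law row needs no nonlinear optimizer: minutes = A·IRT^b is a straight line in log-log space. A sketch on an illustrative 6-model subset from the appendix (so the printed value will not match the table's 10.1h):

```python
import numpy as np

# Power-law fit via plain least squares in log-log space.
irt = np.array([130.0, 150.0, 162.2, 167.6, 169.7, 177.0])
p50 = np.array([20.5, 119.7, 203.0, 293.0, 352.2, 718.8])

b, logA = np.polyfit(np.log(irt), np.log(p50), 1)   # slope = exponent b
pred_h = float(np.exp(logA + b * np.log(186.6)) / 60)
print(f"power-law Mythos p50 ~{pred_h:.0f}h")
```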

Individual benchmarks → METR

As a robustness check, we regress each benchmark individually against METR p50 (univariate log-linear fits), then predict Mythos from its system card score. Of the 27 self-reported benchmarks with ≥4 METR models reporting, only 6 also have Mythos scores available — a data coverage limitation, not a methodological one.

We also attempted multivariate Ridge regression across all available benchmarks. With only 7 overlapping features and 19 data points, the model extrapolates unstably (Mythos scores exceed the training range on every benchmark). The univariate ensemble below is more robust.

| Benchmark | n | R² | LOO | Mythos | Data max | → p50 | Ref |
| --- | --- | --- | --- | --- | --- | --- | --- |
| BrowseComp | 4 | 0.968 | 28% | 86.9% | 84.0% | 14.9h | p. 191 |
| SWE-bench Verified | 14 | 0.856 | 40% | 93.9% | 80.9% | 11.3h | p. 187 |
| GPQA Diamond | 17 | 0.846 | 33% | 94.5% | 92.4% | 5.4h | p. 189 |
| MMMLU | 9 | 0.835 | 56% | 92.7% | 91.8% | 9.7h | p. 189 |
| HLE (no tools) | 6 | 0.712 | 65% | 56.8% | 53.1% | 18.1h | p. 191 |
| Terminal-Bench 2.0 | 4 | 0.173 | 73% | 82.0% | 77.3% | 8.7h | p. 188 |

Univariate log-linear fits. Mythos scores clipped to 115% of data max to limit extrapolation. Sorted by R². Terminal-Bench grayed: R² = 0.17 (poor fit, only 4 models). All predictions involve extrapolation.

Median across 6 benchmarks: 10.5 hours. R² > 0.3 subset (5 benchmarks): median 11.3 hours. These are lower than the IRT-based ~16h and have higher LOO errors (28–65% vs 9–18%). Individual benchmarks are noisier predictors and the extrapolation is more severe (Mythos exceeds every benchmark's data max). IRT aggregation compresses this noise, which is why the IRT-based estimates have better LOO.
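One univariate fit with the 115% clipping rule from the table note can be sketched as follows (toy numbers and a hypothetical helper name, not the original pipeline):

```python
import numpy as np

# Log-linear regression of METR p50 minutes on a single benchmark score,
# with the new model's score capped at 115% of the training max to
# limit extrapolation.
def univariate_predict(scores, p50_minutes, new_score):
    coef = np.polyfit(np.asarray(scores, float),
                      np.log(np.asarray(p50_minutes, float)), 1)
    clipped = min(new_score, 1.15 * max(scores))       # the 115% cap
    return float(np.exp(np.polyval(coef, clipped)) / 60)  # hours

scores, p50 = [60, 70, 80, 84], [50, 120, 300, 450]    # toy data
print(univariate_predict(scores, p50, 99))  # same as predicting at the cap
```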


What the system card says

No METR score, but Anthropic reports internal autonomy evaluations and their own IRT trajectory.

Autonomy evaluations

Anthropic's internal suite tests AI R&D capabilities with hour-equivalent thresholds (System Card Table 2.3.3.A, p. 34):

| Task | Opus 4.5 | Opus 4.6 | Mythos | Threshold |
| --- | --- | --- | --- | --- |
| Kernel task (speedup) | 252× | 190× | 399× | 300× = 40h eq. |
| Time Series (MSE) | 5.71 | 5.80 | 4.55 | <5.3 = 40h eq. |
| LLM Training (speedup) | 16.5× | 34× | 51.9× | >4× = 4–8h eq. |
| Quadruped RL (score) | 19.48 | 20.96 | 30.87 | >12 = 4h eq. |
| Novel Compiler (%) | 69.4% | 65.8% | 77.2% | 90% = 40h eq. |

Source: Claude Mythos Preview System Card, Table 2.3.3.A, p. 34.

Note: These are task-specific hour-equivalents on narrow evaluations, not METR's measure of sustained autonomous work across diverse tasks. The two metrics are not directly comparable.

Anthropic's internal ECI and what it means for our prediction

For the first time, Anthropic published their own IRT-based capability tracking (System Card Section 2.3.6, pp. 40–42), using the same method as Epoch AI's public ECI but with ~300 models and "hundreds of benchmarks, mostly internal." Two findings are directly relevant:

Benchmark saturation at the frontier (Figure 2.3.6.A, p. 40). Most benchmarks in their IRT fit cluster below ECI ~175. Very few exist at Mythos's level (~190 on their internal scale). Anthropic states: "The ECI is only as good as the underlying dataset, and there are currently few benchmarks at Claude Mythos Preview's current capability level to tightly calibrate its ECI score." This applies equally to our prediction: if Mythos's IRT score of 186.6 has wide uncertainty due to benchmark scarcity at the frontier, then our METR extrapolation inherits that uncertainty.

Accelerating capability trajectory (Figure 2.3.6.B, p. 42). The Anthropic frontier from Claude 3 Opus (~118 internal ECI, Jan 2024) to Mythos (~190, Apr 2026) shows a two-phase linear fit with slope ratio 1.86×–4.3× depending on breakpoint. Mythos "appears to be above the pre-Mythos Preview trend, although its error bars are quite large." They caution: "we do not know if this trend will continue with future models."

Reconciling their ECI with our IRT: Anthropic's internal ECI of ~190 and our self-reported IRT of 186.6 are not directly comparable — different benchmark sets, different anchoring. But both place Mythos roughly 10–15 points above Opus 4.6 on their respective scales, which is the gap that drives our METR prediction. The relative gap matters more than the absolute number.

Qualitative assessment from the system card

Task-level performance and sustained autonomy diverge.

Anthropic's internal survey (n=18, p. 35): 1/18 thought Mythos was a drop-in for an entry-level Research Scientist. Their conclusion (p. 45): "Claude Mythos Preview does not seem close to being able to substitute for Research Scientists and Research Engineers, especially relatively senior ones."

Documented failure modes (pp. 35–39) include confabulation, grinding, and factual errors: exactly the behaviors that degrade sustained autonomous performance. METR's time horizon measures precisely this, coherent multi-step work over extended periods. The gap between narrow task scores (40h-equivalent) and the qualitative assessment ("not close" to engineer replacement) is consistent with a p50 time horizon in the 10–20h range.


Summary

| Method | Estimate | Source |
| --- | --- | --- |
| IRT regression, post-o1 (n=14) | ~16h (linear & quadratic converge) | This analysis |
| IRT regression, all models (n=19) | 15.9–27.1h (linear–quadratic) | This analysis |
| Alternative fits (power, sigmoid, piecewise) | 10–17h (4 of 6 fits) | This analysis |
| Univariate benchmark ensemble (6 benchmarks) | 10.5h median (5.4–18.1h range) | This analysis |
| Anthropic's internal task evals | 40h equiv. on 2/3 of tasks | System Card p. 34 |
| Anthropic's qualitative assessment | "Not close" to engineer replacement | System Card p. 45 |

Our regression-based estimates cluster around 10–17 hours, with the tightest fit (post-o1, R²=0.977) converging at ~16h. Anthropic's internal task evaluations are not directly comparable to METR, and their qualitative assessment is consistent with a model that can sustain autonomous work for hours but not reliably for days. METR's actual evaluation, when published, will determine accuracy.

Aside: Meta Muse Spark

Also released this week (April 8, 2026), Meta's Muse Spark sits at IRT ~168 — nearly identical to Opus 4.5 (167.6) and 19 points below Mythos (186.6). Our model predicts ~5 task-hours for Muse Spark, vs ~16 for Mythos.

On aggregate, Muse Spark is competitive: it beats Opus 4.6 on 10 of 19 benchmarks, with large wins on multimodal (CharXiv +21.1, ERQA +13.1), health (HealthBench Hard +28.0), and competitive coding (LiveCodeBench Pro +9.3). But it trails on the benchmarks most predictive of autonomous work:

| Agentic benchmark | Muse Spark | Opus 4.6 | Gemini 3.1 | GPT-5.4 | Mythos |
| --- | --- | --- | --- | --- | --- |
| SWE-bench Verified | 77.4 | 80.8 | 80.6 | — | 93.9 |
| SWE-bench Pro | 52.4 | 53.4 | 54.2 | 57.7 | 77.8 |
| Terminal-Bench 2.0 | 59.0 | 65.4 | 68.5 | 75.1 | 82.0 |
| DeepSearchQA | 74.8 | 73.7 | 69.7 | 73.6 | — |
| ARC AGI 2 | 42.5 | 63.3 | 76.5 | 76.1 | — |

Dashes = not reported.
| Model | IRT | Predicted METR p50 | Actual METR (if known) |
| --- | --- | --- | --- |
| Mythos Preview | 186.6 | ~16h | TBD |
| Opus 4.6 | 177.0 | ~9h* | 12h (actual) |
| Muse Spark (Thinking) | 167.7 | ~5h | TBD |
| Opus 4.5 | 167.6 | ~5h | 4.9h (actual) |
| Gemini 3 Pro | 166.8 | ~5h | 3.7h (actual) |

*Opus 4.6 actual METR is 12h; model underpredicts by ~25%. Muse Spark benchmarks from Meta's release (April 8, 2026). Frontier model benchmarks from respective system cards / announcements.

The IRT of 168 is an aggregate that averages Spark's multimodal strengths with its agentic weaknesses. For METR prediction, the agentic benchmarks matter most, and on those Spark trails Opus 4.6 by 3–6 points. At IRT 168, our model places it near Opus 4.5 (actual METR: 4.9h) and Gemini 3 Pro (actual: 3.7h) — both of which would validate a ~5h prediction. The 19-point IRT gap to Mythos (168 → 187) translates to a predicted 3× difference in autonomous task complexity.
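The quoted 3× follows directly from the two fitted predictions. A quick arithmetic check on the post's own numbers:

```python
import math

# ~5h predicted at IRT 167.7 (Muse Spark) vs ~16h at IRT 186.6 (Mythos)
# implies a per-point log-slope; the 19-point gap should then give ~3x.
b = math.log(16 / 5) / (186.6 - 167.7)   # implied slope per IRT point
ratio = math.exp(b * 19)                 # multiplier across 19 points
print(round(ratio, 2))                   # ≈ 3.22
```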


Robustness checks

80% reliability threshold (p80)

p80 measures the human-task duration a model completes successfully 80% of the time — a stricter bar than the median (p50).

[Figure: At the 80% reliability bar, Mythos predicted at ~2–3 task-hours. All 19 models. The gap between p50 (~16h) and p80 (~2h) reflects high variance in autonomous performance on complex tasks.]

Individual benchmark predictions

As a robustness check, we regress each benchmark individually against METR (univariate log-linear fits). Only 6 of 27 benchmarks have both METR model coverage (≥4) and a Mythos score.

[Figure: Six benchmarks predict 5–18 task-hours (median 10.5h). Each benchmark regressed independently against METR p50. Dot size = R². Mythos scores from the system card.]

Individual benchmarks are noisier (LOO 28–65% vs 9–11% for IRT) and Mythos exceeds the data maximum on all six, making every prediction an extrapolation. IRT aggregation compresses this noise, which is why IRT-based estimates have lower cross-validation error.

Scaling behavior

IRT 130 (Claude 3.5 Sonnet) → IRT 177 (Opus 4.6): +47 points, task complexity grew from ~20 min to ~12 hours (36×).
IRT 177 (Opus 4.6) → IRT 187 (Mythos): +10 points, predicted ~12h → ~16h (1.3×).

On the full dataset, linear and quadratic diverge (15.9h vs 27.1h). On post-o1 only, they converge (both ~16h). The data does not clearly distinguish functional forms in this regime — more frontier model evaluations will resolve whether the relationship remains log-linear or accelerates.


Caveats

Extrapolation risk
Mythos at IRT 186.6 is ~10 points beyond Opus 4.6 at 177.0. Our regression has never been tested in this range. The confidence intervals widen accordingly.
Self-reported score inflation
Our prior SPAR research found labs over-report by ~1.13 pp on average (bootstrap 95% CI: [0.58, 1.74], p=0.0005, n=180 benchmark pairs). Mythos's IRT of 186.6 is derived from self-reported scores. If inflated, the true capability and METR prediction would be lower.
Benchmark saturation
Multiple benchmarks are near ceiling for Mythos (GPQA 94.5%, USAMO 97.6%, SWE-bench Verified 93.9%). Anthropic acknowledges: "The supply of benchmarks at the frontier is still a bottleneck" (System Card p. 40). Saturated benchmarks compress IRT differences and may understate the true capability gap.
Opus 4.6 autonomy outlier
At 718 minutes, Opus 4.6 dramatically outperforms GPT-5.2 (352 min) despite similar IRT. Anthropic may have specifically optimized for autonomous task completion. Our regression, anchored partly by Opus 4.6, may overweight Anthropic-specific gains.
Reward hacking
The system card reports novel reward hacking: Mythos moved computation outside timing calls and found test sets used by graders (p. 35). If similar behaviors emerge in METR evaluations, the measured time horizon could be artificially inflated.
80th percentile data sparsity
While all 19 models now have matched p80 data, the 80th percentile threshold is much harder to reach and the absolute values are small (many under 5 minutes), amplifying noise. The ~2.7 hour post-o1 prediction should be treated as directional.
IRT fallback for Claude 3.5 Sonnet June '24
Claude 3.5 Sonnet June '24 uses third-party IRT (127.0) as fallback since no self-reported IRT was available — the only model requiring this fallback. This introduces a small inconsistency in the otherwise self-reported IRT dataset.

Full methodology
Regression model: log(METR minutes) = a + b·IRT + c·IRT² via numpy.polyfit, degree 1 and 2, fit in log-space.

Data: 19 models from METR-Horizon-v1.1 (Epoch AI), matched to self-reported IRT scores from SPAR master dataset. All 19 have both p50 and p80 METR scores. Three regimes tested: all models (n=19), post-o1 (n=14), and frontier IRT≥130 (n=13).

IRT computation: 2-parameter logistic IRT via scipy.optimize.least_squares, MMLU-Pro anchor, L2 regularization (0.1), minimum 3 benchmarks per model. Self-reported IRT preferred; third-party IRT used as fallback for Claude 3.5 Sonnet June '24 only (127.0).
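A toy Rasch-style (1PL) sketch conveys the shape of this fit: each model i gets an ability theta_i, each benchmark j a difficulty b_j, and the expected score is logistic in their difference. The real pipeline is 2PL (it also fits per-benchmark discrimination), anchors on MMLU-Pro, and handles missing cells; the data below is synthetic and purely illustrative:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
n_models, n_bench = 6, 4
theta_true = np.linspace(-1.0, 2.0, n_models)          # true abilities
b_true = rng.normal(0.0, 1.0, n_bench)                 # true difficulties
scores = 1.0 / (1.0 + np.exp(-(theta_true[:, None] - b_true[None, :])))

def residuals(params):
    theta, b = params[:n_models], params[n_models:]
    pred = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
    fit = (pred - scores).ravel()
    reg = 0.1 * params            # L2 regularization, as in the methodology
    return np.concatenate([fit, reg])

sol = least_squares(residuals, np.zeros(n_models + n_bench))
theta_hat = sol.x[:n_models]
# abilities are shrunk by the regularizer, but their ordering is recovered
print(np.argsort(theta_hat))
```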

Validation: Leave-one-out cross-validation. Median absolute percentage error ranges from 9% (post-o1 linear) to 32% (all models quadratic). 500 bootstrap resamples for 90% confidence bands.
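The bootstrap band can be sketched as follows: refit the log-linear model on resampled model sets and take quantiles of the prediction at Mythos's IRT (synthetic data here; the real analysis resamples the 19 matched models):

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_ci(x, y_log, x_new, n_boot=500, alpha=0.10):
    """90% bootstrap CI on the prediction at x_new (illustrative)."""
    preds = []
    n = len(x)
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)       # resample with replacement
        coef = np.polyfit(x[idx], y_log[idx], 1)
        preds.append(np.exp(np.polyval(coef, x_new)))
    return np.quantile(preds, [alpha / 2, 1 - alpha / 2])

x = np.linspace(130, 177, 14)
y_log = 0.07 * x - 5.0 + rng.normal(0, 0.2, 14)   # noisy log-linear data
lo, hi = bootstrap_ci(x, y_log, 186.6)
print(lo / 60, hi / 60)                            # CI in hours
```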

Mythos IRT score: 186.6 (self-reported, from SPAR master dataset as of April 7, 2026).

System card: Claude Mythos Preview System Card, April 7, 2026. 243 pages. Page numbers cited throughout.
Appendix: all 19 matched models (raw data)

Every model used in the regression, sorted by IRT score. IRT source: SR = self-reported, 3P = third-party fallback. METR CIs from METR-Horizon-v1.1. All times in minutes unless labeled hours.

| Model | IRT | Src | p50 (min) | p50 | p50 CI (min) | p80 (min) | p80 | Release |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Claude Opus 4.6 | 177.0 | SR | 718.8 | 12.0h | [319, 3950] | 69.9 | 1.2h | 2026-02-05 |
| GPT-5.3 Codex | 173.8 | SR | 349.5 | 5.8h | [192, 858] | 54.7 | 0.9h | 2026-02-05 |
| GPT-5.2 | 169.7 | SR | 352.2 | 5.9h | [191, 862] | 66.0 | 1.1h | 2025-12-11 |
| Claude Opus 4.5 (R) | 167.6 | SR | 293.0 | 4.9h | [161, 639] | 49.4 | 0.8h | 2025-11-24 |
| Gemini 3 Pro | 166.8 | SR | 224.3 | 3.7h | [137, 387] | 54.1 | 0.9h | 2025-11-18 |
| GPT-5.1 | 163.4 | SR | 223.7 | 3.7h | [135, 395] | 50.6 | 0.8h | 2025-11-19 |
| GPT-5 | 162.2 | SR | 203.0 | 3.4h | [114, 407] | 38.3 | 0.6h | 2025-08-07 |
| Claude 4.1 Opus | 150.8 | SR | 100.5 | 1.7h | [60, 158] | 23.5 | 0.4h | 2025-08-05 |
| o3 | 150.0 | SR | 119.7 | 2.0h | [73, 192] | 30.0 | 0.5h | 2025-04-16 |
| Claude 4 Opus (R) | 149.9 | SR | 100.4 | 1.7h | [60, 163] | 20.4 | 0.3h | 2025-05-22 |
| Claude 3.7 Sonnet | 140.1 | SR | 60.4 | 1.0h | [33, 107] | 12.1 | 0.2h | 2025-02-24 |
| o1 | 134.6 | SR | 38.8 | 39m | [22, 67] | 7.1 | 7m | 2024-12-05 |
| Claude 3.5 Sonnet (Oct) | 130.0 | SR | 20.5 | 21m | [10, 40] | 2.6 | 3m | 2024-10-22 |
| Claude 3.5 Sonnet (Jun) | 127.0 | 3P | 11.4 | 11m | [5, 23] | 1.7 | 2m | 2024-06-20 |
| o1-preview | 123.8 | SR | 20.3 | 20m | [12, 33] | 4.4 | 4m | 2024-09-12 |
| GPT-4o | 117.1 | SR | 7.0 | 7m | [4, 12] | 1.3 | 1m | 2024-05-13 |
| Claude 3 Opus | 112.3 | SR | 4.0 | 4m | [2, 9] | 0.6 | <1m | 2024-03-04 |
| GPT-4 Turbo | 109.2 | SR | 4.0 | 4m | [2, 8] | 0.8 | <1m | 2023-11-06 |
| GPT-4 | 84.8 | SR | 4.0 | 4m | [2, 8] | 0.9 | <1m | 2023-03-14 |

Data: METR-Horizon-v1.1 YAML matched to SPAR master dataset IRT scores. CSV available at final_matched_metr_irt.csv.

Appendix: benchmark scores underlying the IRT scores

The IRT score for each model is computed from the self-reported benchmark scores below (plus others not shown — models have 5–43 total benchmarks each). IRT tolerates this sparsity by design, but the matrix is sparse: no single benchmark covers all 19 models. GPQA (17/19) has the best coverage. Mythos row shown for reference.

| Model | IRT | GPQA | SWE-V | AIME25 | MMMLU | MATH | MMLU | HLE | ARC-AGI2 | T-Bench | #SR |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Mythos Preview | 186.6 | 94.6 | 93.9 | — | 92.7 | — | — | 56.8 | — | 82.0 | 10 |
| Opus 4.6 | 177.0 | 91.3 | 80.8 | 99.8 | 91.1 | — | — | 53.1 | 68.8 | 65.4 | 26 |
| GPT-5.3 Codex | 173.8 | — | — | — | — | — | — | — | — | 77.3 | 5 |
| GPT-5.2 | 169.7 | 92.4 | 80.0 | 100.0 | 89.6 | — | — | 34.5 | — | 52.9 | 23 |
| Opus 4.5 (R) | 167.6 | 87.0 | 80.9 | — | 90.8 | — | — | 37.6 | 59.3 | — | 10 |
| Gemini 3 Pro | 166.8 | 91.9 | 76.2 | 100.0 | 91.8 | — | — | 45.8 | 31.1 | 54.2 | 18 |
| GPT-5.1 | 163.4 | 88.1 | 76.3 | 94.0 | — | — | — | — | — | — | 9 |
| GPT-5 | 162.2 | 85.7 | 74.9 | 94.6 | 84.7 | — | 92.5 | 24.8 | — | — | 35 |
| Opus 4.1 | 150.8 | 80.9 | 74.5 | 78.0 | 89.5 | — | — | — | — | — | 8 |
| o3 | 150.0 | 83.3 | 69.1 | 86.4 | — | — | — | 14.7 | 6.5 | — | 22 |
| Opus 4 (R) | 149.9 | 79.6 | 72.5 | 75.5 | 88.8 | — | — | — | — | — | 10 |
| Son 3.7 | 140.1 | 84.8 | 70.3 | 54.8 | 86.1 | — | — | — | — | — | 11 |
| o1 | 134.6 | 78.0 | 41.0 | — | 87.7 | 96.4 | 91.8 | — | — | — | 19 |
| Son 3.5 (Oct) | 130.0 | 67.2 | 49.0 | — | — | 78.3 | 90.4 | — | — | — | 19 |
| Son 3.5 (Jun) | 127.0* | — | — | — | — | — | — | — | — | — | 0 |
| o1-preview | 123.8 | 73.3 | 41.3 | — | — | 85.5 | 90.8 | — | — | — | 8 |
| GPT-4o | 117.1 | 70.1 | 33.2 | — | 81.4 | 76.6 | 85.7 | 5.3 | — | — | 43 |
| Opus 3 | 112.3 | 50.4 | — | — | — | 60.1 | 86.8 | — | — | — | 11 |
| GPT-4 Turbo | 109.2 | 48.0 | — | — | — | 72.6 | 86.5 | — | — | — | 6 |
| GPT-4 | 84.8 | 35.7 | — | — | — | 42.0 | 86.4 | — | — | — | 12 |

*Claude 3.5 Sonnet (Jun) uses third-party IRT (127.0) as fallback — no self-reported benchmarks available. #SR = total self-reported benchmarks feeding the IRT computation (including those not shown). SWE-V = SWE-bench Verified. T-Bench = Terminal-Bench 2.0. Dashes = not reported by the model's developer. IRT is computed from ALL available benchmarks per model, not just these 9 columns.