Polymarket Weather Trading: What the Leaderboard Data Actually Shows (2026)
Twenty traders cleared +$10,000 in Polymarket’s weather category in March 2026. Handsanitizer23 turned an $18,734 bet into $64,600 on a single Atlanta temperature bracket. The screenshots are real — and the profits are verifiable on the public leaderboard.
But the bigger story is not that weather traders are winning. It’s that weather may be the rare prediction market where the inputs are free, the settlement is objective, and the market is still thin enough for pricing mistakes to survive.
The public data supports that thesis — but much less cleanly than the viral threads imply.
Bottom Line
This may be one of the last corners of prediction markets where free public data still has a chance to beat crowd intuition.
Key Takeaways
- ✓ 20+ traders cleared $10K+ in Polymarket weather in March 2026 — verifiable on the public leaderboard
- ✓ The structural case: objective resolution, free forecast models, thin liquidity — but public data cannot prove the source of profits
- ✓ The edge may already be compressing: volume has quadrupled since launch, and new 1.25% weather fees hit March 30
What the Public Leaderboard Shows
The March 2026 monthly weather leaderboard on Polymarket (pulled March 27, 2026) tells a clear but incomplete story.
Top 5 Traders — Weather Category, March 2026
| Rank | Trader | Profit | Volume |
|---|---|---|---|
| 1 | Handsanitizer23 | +$74,062 | $838,416 |
| 2 | ColdMath | +$34,842 | $3,373,789 |
| 3 | Shoemaker34 | +$27,721 | $481,714 |
| 4 | Junhoo2 | +$27,555 | $312,456 |
| 5 | HondaCivic | +$26,562 | $2,418,927 |
Source: Polymarket monthly weather leaderboard, pulled March 27, 2026. Twenty traders exceeded +$10,000 in the weather category this month, with total leaderboard volume ranging from $35,000 to over $3.3 million per account.
The biggest single win this month: Handsanitizer23’s position in “Highest temperature in Atlanta on March 17?” — an $18,734 entry that resolved for $64,600.
But three things the leaderboard does not show are at least as important as what it does.
The category is mixed
This is the “weather” category leaderboard — not a “daily temperature” leaderboard. Polymarket’s weather category aggregates daily high temperature brackets with broader climate markets (seasonal temperature records, hottest-month contests, extreme weather events). Profits attributed to “weather trading” may partially derive from structurally different market types.
Survivorship bias is inherent
The leaderboard shows winners. It does not show the broader population of participants who traded weather markets this month and lost. The total number of weather participants is not publicly disclosed, but survivorship bias is inherent in any profit leaderboard.
Public data shows outcomes, not process
Public profiles reveal prediction counts and total volume, but not starting bankrolls, trade-by-trade P&L timelines, or strategy logic. When a profile shows +$74,062 on $838,416 volume, we know the profit-to-volume ratio (~8.8%), but not whether this was a $500 account that compounded aggressively or a $100,000 account grinding modest returns. The “small account to six figures” narrative is a story, not a documented fact from the data available.
Why Weather Markets Are Structurally Different
Three properties make Polymarket’s daily temperature markets more amenable to model-based analysis than politics, crypto, or culture markets. These are structural observations about the category, not claims about any specific trader’s edge.
Weather may be the rare prediction market where the inputs are free, the settlement is objective, and the edge survives only because most participants still don’t bother to do the math.
Objective resolution with minimal interpretation. Temperature markets resolve against a specific weather station reading — typically an airport station sourced from Weather Underground, with defined rounding rules specified in each market’s resolution criteria. The outcome is a number from a public sensor, not an editorial judgment, a vote count subject to dispute, or a price determined by another market. This makes temperature outcomes far less interpretive than political or cultural markets, though edge cases exist: station sensor failures, Weather Underground display delays, and ambiguity in rounding rules add operational risk that doesn’t appear in the leaderboard numbers.
Free, calibrated forecast models with measurable error. The Global Forecast System (GFS) updates every six hours. The European Centre for Medium-Range Weather Forecasts (ECMWF) runs twice daily. The National Weather Service issues point forecasts for specific stations across the United States. These models have measurable error characteristics that vary by city, season, and lead time — and those error characteristics are publicly documented in NWS verification reports.
This means a calibrated probability distribution across temperature brackets is constructible, not theoretical. For a given city, date, and forecast lead time, it is possible to estimate the probability of the high temperature landing in each 2°F bracket based on the forecast point and the historical error distribution for that location. Whether most Polymarket participants actually perform this calibration is a separate question — and one that public data cannot answer.
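As a sketch of how such a distribution can be built, assuming normally distributed forecast errors (a simplification that real verification data only approximates), bracket probabilities follow directly from a point forecast and a historical error sigma. The 84°F forecast and 4.5°F sigma here are illustrative, not real quotes:

```python
import math

def p_at_least(threshold: float, forecast: float, sigma: float) -> float:
    """P(daily high >= threshold) under a normal forecast-error model."""
    z = (threshold - forecast) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))

def bracket_probs(forecast: float, sigma: float, lo: float, hi: float,
                  width: float = 2.0) -> dict:
    """Probability of the high landing in each [t, t + width) bracket."""
    probs = {}
    t = lo
    while t < hi:
        probs[(t, t + width)] = (p_at_least(t, forecast, sigma)
                                 - p_at_least(t + width, forecast, sigma))
        t += width
    return probs

# Hypothetical inputs: an 84°F point forecast with a 4.5°F error sigma.
probs = bracket_probs(forecast=84.0, sigma=4.5, lo=76.0, hi=92.0)
```

The bracket probabilities will not sum to exactly 1 because mass outside the 76–92°F range sits in the open-ended tail brackets that real markets also list.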
Thin liquidity amplifies potential mispricings. Individual daily temperature markets on Polymarket vary widely in volume — from under $40,000 for smaller international cities to over $400,000 for active U.S. cities on high-attention days. In thinner markets, a few uninformed trades can push prices meaningfully away from model-implied fair values. This is a structural observation about market microstructure, not a claim about participant sophistication. As volume grows and more systematic participants enter, this property weakens.
A Worked Example: Houston, March 26
On March 26, 2026, two weather bracket trades on Houston illustrate how the same city, the same day, and the same analytical approach can produce both a win and a loss. The market prices and outcomes below are from public, resolved markets. The probability estimates are illustrative — based on a hypothetical NWS forecast of 84°F for KIAH (Houston Intercontinental) with a city-specific historical forecast error σ of approximately 4.5°F for late-March Houston.
| Bracket | Market | Model | Outcome |
|---|---|---|---|
| ≥ 85.5°F | YES at 28¢ | ~37% probability | NO won ✓ |
| ≥ 83.5°F | YES at 61¢ | ~54% probability | NO lost ✗ |
Bracket: “Will the high be 85.5°F or above?” The market priced YES at 28¢ — implying a 28% chance of reaching that threshold. The illustrative distribution places approximately 37% probability above 85.5°F. The model favored YES at 28¢, not NO at 72¢. The NO side ultimately won, but the model-implied edge was not clearly on that side.
Bracket: “Will the high be 83.5°F or above?” The market priced YES at 61¢ — implying a 61% chance of reaching 83.5°F. The illustrative distribution places approximately 54% probability above 83.5°F — meaning the market was overpricing YES by about 7 percentage points, making NO at 39¢ modestly +EV. But Houston’s actual high exceeded 83.5°F. The NO position lost.
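The modeled edge on each bracket reduces to per-contract arithmetic: the model-favored side’s win probability minus its price. A sketch using only the illustrative numbers above, not real edges:

```python
def ev_per_contract(p_win: float, price: float) -> float:
    """Expected value per $1 contract: p_win * $1 payout - price paid."""
    return p_win - price

# >= 85.5°F bracket: model ~37% YES vs. a 28¢ price, so YES is model-favored.
ev_yes_855 = ev_per_contract(0.37, 0.28)   # +9¢ per contract
# >= 83.5°F bracket: model ~54% YES, i.e. ~46% NO, with NO priced at 39¢.
ev_no_835 = ev_per_contract(0.46, 0.39)    # +7¢ per contract
```

Both model-favored sides showed positive modeled EV, and on this particular day both happened to resolve against the model. That is the variance point, not a failure of the arithmetic.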
One trade won. One trade lost. Same city, same day. The bracket closer to the forecast point (83.5°F, only half a degree below the 84°F forecast) carried far more uncertainty than the tail bracket (85.5°F). This is the reality that leaderboard screenshots don’t capture: even when the structural math favors a trade, individual outcomes are determined by where the temperature actually lands relative to a probabilistic distribution — not by whether the model was “right.”
Even when the structural math favors a trade, individual outcomes are determined by where the temperature actually lands — not by whether the model was “right.”
The broader pattern is worth noting. Tail brackets — those furthest from the forecast point — tend to offer the widest gaps between model-implied and market-implied probabilities. But they also resolve against you less often, creating a payoff shape where small, frequent gains are punctuated by occasional larger losses. This is consistent with the longshot bias documented extensively in sports betting and horse racing: low-probability outcomes are persistently overpriced, and the traders who profit from this pattern do so across hundreds of trades, not on any single bracket.
What Leaderboard Profiles Suggest (But Cannot Prove) About Strategy
Public Polymarket profiles display prediction counts, total volume, biggest wins, and category rankings. From these observables, we can identify patterns. We cannot determine causation, method, or intent.
High-volume, high-count profiles.
ColdMath traded $3.37 million in weather volume this month. HondaCivic traded $2.42 million across 2,776 predictions. These profiles are consistent with a systematic or semi-systematic approach — frequent participation across many markets. Public data cannot distinguish whether these traders run calibrated forecast models, execute market-making strategies, copy other wallets, or employ some combination of approaches.
Low-count, high-profit profiles.
Handsanitizer23 posted +$74,062 in profit on just 34 predictions this month. That implies an average profit per resolved prediction of approximately $2,180 — suggesting infrequent, high-conviction, large-sized positions. However, “34 predictions” may not correspond to 34 independent market entries — Polymarket’s prediction count methodology is not fully transparent, and a single market can involve multiple bracket positions.
What public data cannot distinguish.
Whether any specific trader uses GFS or ECMWF forecasts, real-time station observations, enters positions near resolution time when uncertainty has narrowed, hedges adjacent brackets, or employs any particular strategy. The leaderboard shows who profited. It does not show how.
The Structural Math: Why Category Properties Matter More Than Any Single Strategy
One caveat frames everything below: the structural math applies to the category as a whole, not to any individual trader — a distinction the viral threads rarely make.
Forecast error is measurable and city-specific. NWS forecast error for daily high temperature is well-studied through official verification reports. City-specific standard deviations typically range from approximately 2.5°F for tropical and coastal cities (where marine influence stabilizes temperatures) to 7°F or more for continental cities during transitional seasons (where frontal passages create large forecast uncertainty). Summer forecasts are generally tighter than winter; 24-hour forecasts are tighter than 48-hour forecasts. These are not estimates — they are published statistics derived from decades of verification data.
However, these error distributions are not perfectly normal, and they exhibit known regional biases. NWS verification reports document warm biases in certain U.S. regions during frontal passages and seasonal transitions. A model that assumes symmetric, normally distributed errors without accounting for these biases will systematically misprice certain brackets — potentially in the wrong direction. Backtesting only winning periods, or only high-volume cities where outcomes happened to favor the model, overstates the edge.
Model-implied fair probability versus market price. When a temperature bracket is priced at 20¢ on Polymarket, the market implies a 20% probability that the temperature will land in that range. If a calibrated forecast distribution — built from the NWS point forecast and the city-specific error distribution for that lead time — estimates the model-implied fair probability at 12%, the bracket appears overpriced by 8 percentage points.
This gap is the same phenomenon documented in horse racing and sports betting as “longshot bias” — the persistent tendency for low-probability outcomes to be overpriced relative to their true frequency. In sports betting, the explanation involves psychological preference for large potential payoffs on small stakes. In prediction markets, the mechanism may be similar: cheap contracts (5¢–20¢) attract disproportionate buying interest regardless of whether the implied probability is calibrated.
Variance and compounding: the short-odds framework. A trader systematically selling NO on overpriced tail brackets at 85¢–95¢ faces a concave payoff structure — small gains on most trades, occasional large losses. But research from sports betting mathematician Joseph Buchdahl and others demonstrates that this payoff shape, despite looking fragile on any single trade, maximizes expected bankroll growth when the edge is genuine.
The key metric is Maximum Expected Growth (MEG), approximated as edge² / (2 × fractional odds). At short odds (near 1.05–1.15 in decimal), MEG per unit of risk is dramatically higher than at long odds (5.0–20.0) with the same percentage edge. Monte Carlo simulations published by the Smart Betting Club show that over 858 bets at short odds with genuine edge, zero out of 100 simulated bettors lost money — while 38% of simulated longshot bettors with identical edge went negative over the same period.
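The MEG approximation quoted above is easy to check numerically. In the sketch below, fractional odds means decimal odds minus one, and the 3% edge and contract prices are illustrative:

```python
def meg(edge: float, decimal_odds: float) -> float:
    """Approximate Maximum Expected Growth per bet:
    edge^2 / (2 * fractional_odds), where fractional = decimal - 1."""
    return edge ** 2 / (2.0 * (decimal_odds - 1.0))

# The same 3% edge at short odds (a 90¢ NO contract, decimal ~1.11)
# versus long odds (a 10¢ tail bracket, decimal 10.0):
short = meg(0.03, 1.0 / 0.90)
long_shot = meg(0.03, 1.0 / 0.10)
```

Same percentage edge, roughly 81 times the expected growth per bet at the short end. This is the arithmetic behind the claim that tail-bracket sellers compound where longshot buyers churn.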
This framework explains why some weather traders maintain near-perfect win rates while others swing wildly: it’s not (only) about forecast quality. It’s about where on the probability curve you choose to trade. The weather edge, if it exists, is probably less about predicting the exact temperature and more about repeatedly finding tail brackets the market prices badly — then letting the short-odds math compound over hundreds of trades.
What Public Data Cannot Tell You
We don’t know who’s losing.
The leaderboard shows winners. It does not reveal the broader population of weather participants who lost, or by how much. Survivorship bias is inherent in any profit leaderboard.
We don’t know the source of profits.
A profitable weather trader might be running calibrated forecast models, exploiting execution timing near resolution, market-making for spread capture, copying wallets, or simply experiencing favorable variance over a small sample. Public profiles expose none of this.
We don’t know starting bankrolls, and the category is mixed.
“+$74,062 profit” on $838K volume tells us the ratio, not whether this was a $500 account or a $100K account. And the “weather” leaderboard aggregates daily temperature brackets with broader climate markets — seasonal records, disaster predictions, hottest-month contests. A trader’s weather P&L may partially reflect structurally different markets.
The same data is available to everyone.
GFS and ECMWF forecasts are free. Any durable edge likely comes not from “having the model” but from city-specific calibration, station-rule expertise, lead-time optimization, and execution discipline — harder to replicate but also harder to verify from the outside.
What Could Compress or Eliminate the Edge
More systematic participants are already present.
The monthly leaderboard shows accounts like ColdMath ($3.37M volume) and multiple wallet-address-only accounts trading at scale. Since weather markets launched in late 2025, volume has roughly quadrupled while profit-to-volume ratios on the monthly leaderboard have compressed — suggesting that whatever edge exists is being competed away in real time. As more systematic participants enter, the gap between market price and model-implied fair probability should continue to narrow.
Polymarket’s new weather fees add friction.
Effective March 30, 2026, Polymarket’s expanded fee structure includes weather markets at a peak taker fee rate of 1.25%. The V2 fee formula, feeRate × p × (1 − p), means the effective fee rate is highest near 50% probability and declines toward the extremes. On a NO contract at 90¢ (a typical entry point for tail-bracket sellers), the implied effective fee is approximately 0.45%. These fees apply only to weather markets deployed on or after the March 30 activation date; pre-existing markets are unaffected. For traders grinding edges of 3–5%, a 0.45–1.25% fee layer turns some positive-expectation trades into breakeven or negative-expectation trades.
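Assuming the curve is normalized so its peak effective rate at p = 0.5 equals the stated 1.25% (the exact feeRate constant is not spelled out here, so this normalization is an assumption), the fee at any price is one line of arithmetic:

```python
def effective_fee_rate(p: float, peak: float = 0.0125) -> float:
    """Effective taker fee rate at contract price p for a curve of the
    form feeRate * p * (1 - p), normalized (an assumption) so the
    effective rate peaks at `peak` when p = 0.5, i.e. feeRate = 4 * peak."""
    return 4.0 * peak * p * (1.0 - p)

# Fee at a 50¢ contract (the peak) and at a 90¢ tail-bracket NO entry:
fee_mid = effective_fee_rate(0.50)   # 1.25%
fee_90 = effective_fee_rate(0.90)    # 0.45%
```

Under this normalization, a tail-bracket seller entering at 90¢ pays roughly a third of the headline peak rate, which is why the fee curve punishes mid-probability trades hardest.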
Model convergence erodes informational advantage.
If the edge comes primarily from applying freely available GFS and ECMWF forecast data against retail-driven market prices, it erodes as more participants apply the same inputs. The remaining edge — if any — would come from city-specific calibration refinements, station-rule expertise, and execution timing. These are harder to scale and harder to verify.
Execution reality in thin markets.
In a daily temperature market with $50,000–$100,000 in total volume, a single $5,000 order can move the price meaningfully before fully filling. The edge that exists on paper may not survive execution at size. This helps explain why some leaderboard traders show high volume but modest profit-to-volume ratios — the market absorbs their theoretical edge through slippage.
Resolution edge cases.
Weather station sensor failures, data source display delays, and ambiguity in rounding rules create operational risks that do not appear in leaderboard numbers. These events are rare but real, and a single resolution dispute on a large position can erase weeks of grinding.
The edge may not be forecasting.
Even if weather markets are more modelable than politics, the durable edge may come less from broad forecast-model application and more from late execution near resolution (when station observations narrow the range), station-rule fluency, and market microstructure. A “weather strategy” built purely on GFS/ECMWF probability distributions may not scale as well as the structural case implies — because execution timing and bracket-specific liquidity dynamics matter at least as much as the forecast itself.
Brackets are not independent bets.
Adjacent temperature brackets are mechanically linked — if one bracket’s probability rises, neighboring brackets must adjust. Treating each bracket as an isolated binary can make variance and edge calculations look cleaner than the actual portfolio-level trading reality.
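One way to see the linkage: two threshold markets on the same day mechanically imply a price for the band between them, and quotes that violate the ordering are internally inconsistent. A sketch using the illustrative Houston-style quotes:

```python
def implied_bracket_prob(p_ge_lo: float, p_ge_hi: float) -> float:
    """P(lo <= high < hi) implied by two threshold quotes:
    P(high >= lo) - P(high >= hi). Quotes must be ordered."""
    if p_ge_hi > p_ge_lo:
        raise ValueError("inconsistent quotes: P(>= hi) exceeds P(>= lo)")
    return p_ge_lo - p_ge_hi

# Illustrative quotes: P(>= 83.5) at 61¢ and P(>= 85.5) at 28¢
# jointly price the 83.5-85.5°F band at 33¢.
band = implied_bracket_prob(0.61, 0.28)
```

A position in either threshold is therefore also an implicit position in the band, which is why treating brackets as isolated binaries understates portfolio-level exposure.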
For Sports Bettors Moving Into Prediction Markets
If you already evaluate player prop markets — “Will LeBron score over 27.5 points?” — you already have the framework for daily temperature brackets. The structure is identical: a continuous variable discretized into brackets with defined resolution sources. Estimate the probability distribution, compare it to the market-implied probability, calculate expected value after fees.
The key difference: weather forecast models are free, public, and update on fixed schedules. Sports injury and lineup information is gated and fast-moving. Weather markets move on a slower timescale — hours, not minutes.
Polymarket weather markets carry up to 1.25% peak taker fees after March 30. Kalshi offers daily temperature markets through its regulated U.S. exchange at its standard formula-based fee. For a detailed comparison, see the ChanceMetrics Prediction Market Fees Compared calculator.
Tools for Evaluating Weather Bracket Trades
These tools help estimate modeled expected value under your own assumptions — not predict outcomes.
Expected Value Calculator →
Enter a market price and your probability estimate to see fee-adjusted EV under those assumptions.
Prediction Market Fees Compared →
Compare exact per-contract costs across Kalshi and Polymarket. Select “Weather” to see the new fee curve.
Kelly Criterion Calculator →
Calculate a theoretical Kelly benchmark for position sizing. The math framework, not a prescription.
Ready to explore a live U.S. prediction market platform?
Open a Kalshi account →
ChanceMetrics may earn a referral commission if you sign up through this link. This does not affect our editorial content, calculator methodology, or how we evaluate platforms. Trading involves risk of loss.
Conclusion
Weather markets are among the most structurally analyzable categories on any prediction platform — objective data, free models, thin liquidity. The March 2026 leaderboard is consistent with systematic edge. But consistency is not proof, and the clock is ticking: volume is up fourfold since launch, fees are expanding March 30, and automated accounts are already trading at scale.
The leaderboard does not prove weather is easy. But if the profits persist after fees expand and more systematic traders arrive, that will say more than any viral P&L screenshot ever could.
Keep reading
- Kalshi vs Polymarket — full platform comparison including fees, regulation, and market selection
- Prediction Market Fees Compared — see how Polymarket’s new weather fees compare to Kalshi
- EV Calculator — evaluate whether a specific bracket trade is mathematically favorable
- What Is Kalshi? — how the CFTC-regulated exchange works
- Learn — step-by-step guide to placing your first prediction market trade
Frequently Asked Questions
Common questions about Polymarket weather trading, fees, and how temperature bracket markets work.
Is Polymarket weather trading profitable?
The March 2026 monthly weather leaderboard shows 20+ traders with profits exceeding $10,000. However, the leaderboard only shows winners — not the full distribution of participants. Survivorship bias is inherent in any profit leaderboard, and public profiles do not reveal starting bankrolls, strategy logic, or trade-by-trade P&L.
How do Polymarket daily temperature markets work?
Polymarket daily temperature markets ask ‘What will the highest temperature be in [city] on [date]?’ The outcome is divided into 2°F brackets. Each bracket trades as a YES/NO contract priced from $0.01 to $0.99, settling at $1 if the observed temperature lands in that range. Resolution is based on a specific weather station reading, typically from Weather Underground, with defined rounding rules.
What is the structural edge in weather markets?
Weather markets have three properties that make them more amenable to model-based analysis than politics or crypto: objective resolution against public weather stations, free calibrated forecast models (GFS, ECMWF, NWS) with measurable error distributions, and thin liquidity that can allow pricing mistakes to persist. Whether this constitutes a tradeable edge depends on execution, fees, and competition.
What are Polymarket weather trading fees?
Effective March 30, 2026, Polymarket charges category-based taker fees on weather markets with a peak effective rate of 1.25% near 50% probability. The effective rate declines toward price extremes. Fees apply only to weather markets deployed on or after the activation date. Pre-existing markets are unaffected.
Does Kalshi have weather markets?
Yes. Kalshi offers daily high temperature markets for U.S. cities through its CFTC-regulated exchange. Kalshi uses a formula-based fee — 0.07 × contracts × price × (1 − price) — which differs from Polymarket’s category-based structure. For a detailed comparison, see the ChanceMetrics Prediction Market Fees Compared calculator.
What is longshot bias in prediction markets?
Longshot bias is the persistent tendency for low-probability outcomes to be overpriced relative to their true frequency. In weather markets, this means cheap temperature brackets (priced at 5¢–20¢) may attract disproportionate buying interest, creating opportunities for traders who can estimate calibrated probabilities from forecast models.
Educational content only. This article is for informational purposes and does not constitute financial, legal, or tax advice. Prediction market trading carries significant risk. Past results, fee estimates, and legal summaries may not reflect current conditions. Always consult a qualified professional before making financial decisions.