NEWSFERENCE
FRI, 15 May 2026 20:32:23
CLUSTER · TIER 2

Nous Research releases Lighthouse Attention for faster long-context pre-training

Nous Research has open-sourced Lighthouse Attention, a selection-based hierarchical attention mechanism for long-context pre-training that delivers a 1.4–1.7× wall-clock speedup at 98K-token context and runs ~17× faster than standard attention at 512K tokens on a single B200 GPU. The approach uses a multi-resolution pyramid with top-k cascade selection and requires no custom sparse attention kernel, no straight-through estimator, and no auxiliary loss.
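The release itself links no code here, but the general shape of selection-based hierarchical attention can be sketched: pool keys into coarse block summaries (one level of the pyramid), let each query score the summaries, keep only the top-k blocks, and run exact attention over just those tokens. The sketch below is a minimal single-level NumPy illustration of that idea under stated assumptions — mean pooling, hard top-k selection, one query — and all function names, block sizes, and pooling choices are hypothetical, not Nous Research's implementation; the actual method cascades this selection across multiple pyramid resolutions.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def block_sparse_attention(q, K, V, block=4, top_k=2):
    """Single query attends to only its top-k key blocks.

    Blocks are summarized by mean pooling (a one-level 'pyramid');
    the query scores the summaries, keeps the best top_k blocks,
    and runs exact dense attention over just those tokens. The
    selection is hard argmax-style, but because the kept tokens get
    exact attention, no straight-through estimator is needed for
    them to receive gradients in a framework with autodiff.
    """
    T, d = K.shape
    n_blocks = T // block
    # Coarse level: mean-pooled block summaries, shape (n_blocks, d)
    summaries = K[: n_blocks * block].reshape(n_blocks, block, d).mean(axis=1)
    # Score each block summary against the query, keep the top_k blocks
    block_scores = summaries @ q / np.sqrt(d)
    keep = np.sort(np.argsort(block_scores)[-top_k:])
    # Gather the token indices of the selected blocks
    idx = np.concatenate([np.arange(b * block, (b + 1) * block) for b in keep])
    # Exact attention restricted to the selected tokens
    w = softmax(K[idx] @ q / np.sqrt(d))
    return w @ V[idx], idx
```

With `top_k` equal to the number of blocks this reduces to full attention; the speedup comes from attending to `top_k * block` tokens instead of all `T`, so cost grows with the selection budget rather than the context length.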

SOURCES 1 · X MENTIONS 84k · FIRST SEEN 2D AGO · VELOCITY +2%/6h
CONTRIBUTING SOURCES
1 ARTICLE
  1. X (Twitter) · 2D AGO
     x.com/NousResearch/status/2055337939270332862
X DISCOURSE
84k TOTAL · TOP 3
@NousResearch · 3H · 122.5K
RT @akshay_pachaar: https://t.co/Exoyd8tB0d
@NousResearch · 7H · 115.0K
RT @akshay_pachaar: https://t.co/Exoyd8tB0d
@NousResearch · 1D · 85.2K
RT @akshay_pachaar: https://t.co/Exoyd8tB0d