
![0xmons Avatar](https://lunarcrush.com/gi/w:24/cr:twitter::1316081223576428549.png) 0xmons [@0xmons](/creator/twitter/0xmons) on x 33K followers
Created: 2025-06-26 16:21:43 UTC

Really cool work!

Some qs:

- is it right to summarize that the main speedup over other parallelized solutions is around 5x? (see last section of whitepaper)

- in actual prod, it seems like consensus or latency would be the bottleneck? any thoughts on how simulated tests could help account for this? (rough sketch of what I mean after this list)

- the paper mentions that even at high alpha and lambda levels the observed TPS stays high, but could this be a function of everything staying in cache rather than the algorithm itself doing most of the work? would we expect similar perf from systems that did less ordering but just scaled out more compute in this regime? (second sketch after this list)
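
To make the second question concrete, here's a toy sketch of the kind of simulated test I have in mind (Python; every number is a made-up placeholder, not taken from the whitepaper). It models a simple execute-then-commit pipeline where each block pays a consensus round trip on top of parallel execution:

```python
# Toy model of end-to-end throughput when fast parallel execution is
# chained behind a consensus round per block. All parameters below are
# hypothetical placeholders, not figures from the whitepaper.

def effective_tps(exec_tps: float, block_txs: int, consensus_latency_s: float) -> float:
    """Steady-state TPS of an execute-then-commit pipeline.

    Each block of `block_txs` transactions costs:
      - block_txs / exec_tps seconds of (parallel) execution, plus
      - consensus_latency_s seconds to order and commit the block.
    """
    exec_time = block_txs / exec_tps
    return block_txs / (exec_time + consensus_latency_s)

if __name__ == "__main__":
    # Hypothetical setup: 500k TPS raw execution, 10k-transaction blocks.
    for latency_ms in (0, 50, 200, 500):
        tps = effective_tps(exec_tps=500_000, block_txs=10_000,
                            consensus_latency_s=latency_ms / 1000)
        print(f"consensus latency {latency_ms:4d} ms -> ~{tps:,.0f} TPS")
```

Even generous execution numbers get capped hard once the consensus round dominates block time; if execution and consensus are pipelined the cap shifts toward block_txs / max(exec_time, consensus_latency), which is exactly the kind of knob a simulation harness could sweep.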
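
And for the cache question, a back-of-envelope roofline-style model (again, every constant is a hypothetical placeholder): once the hot state spills out of the last-level cache, the memory-bound ceiling can drop below the compute-bound ceiling, and observed TPS falls with it regardless of how clever the ordering algorithm is:

```python
# Back-of-envelope model of why a benchmark whose hot state fits in cache
# can report much higher TPS than the same algorithm hitting DRAM.
# Every constant here is a hypothetical placeholder.

def memory_bound_tps(bytes_per_tx: float, working_set_bytes: float,
                     llc_bytes: float = 64e6,          # ~64 MB last-level cache
                     cache_bw_bytes_s: float = 400e9,  # ~400 GB/s on-chip bandwidth
                     dram_bw_bytes_s: float = 25e9) -> float:  # ~25 GB/s random-access DRAM
    """TPS ceiling from state access alone, ignoring compute."""
    bw = cache_bw_bytes_s if working_set_bytes <= llc_bytes else dram_bw_bytes_s
    return bw / bytes_per_tx

if __name__ == "__main__":
    compute_tps = 1_000_000   # hypothetical ceiling if state access were free
    bytes_per_tx = 100_000    # hypothetical state touched per tx (accounts, indexes, proofs)
    for ws_mb in (16, 64, 256, 4096):
        mem_tps = memory_bound_tps(bytes_per_tx, working_set_bytes=ws_mb * 1e6)
        print(f"working set {ws_mb:5d} MB -> observed TPS ~ {min(compute_tps, mem_tps):,.0f}")
```

A bandwidth-only model like this is still generous; random-access latency would make the out-of-cache cliff steeper, which is why I'm curious whether a scale-out-compute, less-ordering design looks different once state no longer fits in cache.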


XXXXX engagements

![Engagements Line Chart](https://lunarcrush.com/gi/w:600/p:tweet::1938271468015784278/c:line.svg)

**Related Topics**
[tps](/topic/tps)
[mentions](/topic/mentions)

[Post Link](https://x.com/0xmons/status/1938271468015784278)
