Tomer Galanti posts on X, most often about the topics loops and succinct. They currently have XXXXX followers and XX posts still getting attention, totaling XX engagements in the last XX hours.
Social category influence: currencies
Social topic influence: loops, succinct
Top posts by engagements in the last XX hours
"@aryehazan Thought you might like it :) We implement MDL ERM with LLMs to enjoy the favorable sample complexity of naive program enumeration and avoid its heavy computational cost. We use it to learn cool functions like IsPrime and 10-Parity with XXX samples"
X Link @GalantiTomer 2025-10-17T01:35Z 1377 followers, XXX engagements
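The "10-Parity" task mentioned above can be sketched as follows. This is an illustrative setup, not the paper's exact benchmark: I assume the label is the XOR of 10 fixed secret coordinates of a binary input string, and the index set and input length are arbitrary choices.

```python
import random

# Hypothetical illustration of a "10-Parity" task: the label is the parity
# (XOR) of 10 fixed secret coordinates. The secret positions and the input
# length below are assumptions for the sketch, not the paper's values.
SECRET_INDICES = [0, 3, 5, 7, 11, 13, 17, 19, 23, 29]
INPUT_LENGTH = 32

def parity_label(bits):
    """Ground-truth labeler: XOR of the bits at the secret positions."""
    return sum(bits[i] for i in SECRET_INDICES) % 2

def sample_dataset(n, seed=0):
    """Draw n uniform binary strings together with their parity labels."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        x = [rng.randint(0, 1) for _ in range(INPUT_LENGTH)]
        data.append((x, parity_label(x)))
    return data
```

The ground-truth program is only a few lines long, which is what makes short-program search (rather than gradient descent) a plausible way to recover it from few samples.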
"@aryehazan We also rely on some classic results in SQ learning to justify why online SGD training on such tasks requires tons of data. Really enjoyed @lreyzin's survey it helped a lot😊"
X Link @GalantiTomer 2025-10-17T01:44Z 1377 followers, XX engagements
"🤔 Can a neural network learn to distinguish primes from composites? Not so simple. Large transformers can easily memorize 100k samples and fail to learn the underlying rule. So how can learning recover that program?"
X Link @GalantiTomer 2025-10-17T03:02Z 1377 followers, XXX engagements
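The memorization failure mode described above is easy to set up concretely. The split below is a hypothetical sketch: a trial-division labeler generates ground truth, and train and test cover disjoint ranges so that a model that only memorizes training pairs scores near chance on the test range.

```python
def is_prime(n):
    """Ground-truth labeler by trial division (fine for small ranges)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Disjoint train/test ranges (the exact ranges are assumptions): a model
# that memorizes the training pairs gains nothing on the held-out range.
train = [(n, is_prime(n)) for n in range(2, 1000)]
test = [(n, is_prime(n)) for n in range(1000, 1200)]
```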
"⚙️ Our idea (LLM-ERM): A propose & verify algorithm: X Use an LLM to propose short Python candidate programs. X Verify each on the train/validation data. X Pick the best verified one. No gradients, no fine-tuning, no feedback loops"
X Link @GalantiTomer 2025-10-17T03:02Z 1377 followers, XXX engagements
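The propose-and-verify loop above can be sketched in a few lines. The proposal step is stubbed with a hard-coded candidate list standing in for LLM output; `propose_candidates`, the candidate strings, and the function name `f` are all illustrative assumptions, not the paper's implementation.

```python
def propose_candidates():
    """Stand-in for the LLM proposal step: short Python program strings."""
    return [
        "def f(n):\n    return n % 2 == 1",  # a plausible but wrong rule
        "def f(n):\n"
        "    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))",
    ]

def verify(src, data):
    """Compile a candidate and score its accuracy on held-out (x, y) pairs."""
    scope = {}
    try:
        exec(src, scope)  # candidates are trusted here; sandbox in practice
        f = scope["f"]
        return sum(f(x) == y for x, y in data) / len(data)
    except Exception:
        return 0.0  # programs that crash or fail to parse score zero

def llm_erm(data):
    """ERM over the proposed programs: keep the best verified candidate."""
    return max(propose_candidates(), key=lambda src: verify(src, data))
```

Note that verification is the only feedback: candidates are scored on data, never used to update the proposer, matching the "no gradients, no fine-tuning, no feedback loops" claim.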
"🧩 Interpretability: Both the learned function and the learning process are interpretable. Each LLM-ERM run returns human-readable code and its reasoning trace; a step-by-step audit of how the model arrived at the solution (e.g. discovering Miller-Rabin for primality)"
X Link @GalantiTomer 2025-10-17T03:02Z 1376 followers, XXX engagements
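For context on the Miller-Rabin mention: below is the kind of human-readable primality program such a run might return. This is a standard deterministic variant (exact for all 64-bit integers with this fixed base set), written here as an illustration rather than reproduced from the paper's output.

```python
def miller_rabin(n, bases=(2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)):
    """Miller-Rabin primality test; deterministic for n < 2**64 with these bases."""
    small_primes = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    if n < 2:
        return False
    if n in small_primes:
        return True
    if any(n % p == 0 for p in small_primes):
        return False
    # Write n - 1 = d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # base a witnesses that n is composite
    return True
```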
"📚 Theory: We use classic results in SQ and show that coordinate SGD needs many samples to learn even simple short-program families (like parity). On the flip side, exhaustive program enumeration requires much less data but runs forever"
X Link @GalantiTomer 2025-10-17T03:02Z 1376 followers, XX engagements
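The enumeration side of this tradeoff can be made concrete with a standard Occam-razor bound. This is a textbook statement under realizability assumptions, not the paper's exact theorem:

```latex
% Programs of description length at most $s$ bits form a finite class
% $\mathcal{H}$ with $|\mathcal{H}| \le 2^{s+1}$. In the realizable case,
% ERM by exhaustive enumeration over $\mathcal{H}$ is $\varepsilon$-accurate
% with probability $1-\delta$ using only
\[
  m \;=\; O\!\left(\frac{s + \log(1/\delta)}{\varepsilon}\right)
\]
% samples, while the enumeration itself takes time $\Omega(2^{s})$:
% sample-cheap but computationally infeasible.
```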
"In-context learning fails too. We gave 30B-parameter instruction-tuned models (e.g. Qwen3-30B-A3B) the full XXX training examples in the prompt. Test accuracy stayed near random (50%). Even reasoning LLMs couldn't infer the underlying algorithm from examples alone"
X Link @GalantiTomer 2025-10-17T03:02Z 1377 followers, XXX engagements
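The in-context baseline amounts to packing every labeled example into one prompt and scoring the model's completions. A minimal sketch, with the prompt format as an assumption and the model call itself omitted (it requires an external LLM):

```python
def build_icl_prompt(train_pairs, query):
    """Pack all labeled (x, y) examples into one prompt, then ask about query."""
    lines = ["Decide whether each number is prime (True/False)."]
    lines += [f"{x} -> {y}" for x, y in train_pairs]
    lines.append(f"{query} -> ")
    return "\n".join(lines)

def icl_accuracy(predict, test_pairs):
    """Score a model's string completions against the ground-truth labels."""
    return sum(predict(x) == str(y) for x, y in test_pairs) / len(test_pairs)
```

With this harness, a model that always answers "True" (or memorizes the prompt) sits at chance on a balanced test set, which is the near-random behavior the post reports.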
"💡 Key takeaway: LLM-guided propose & verify restores ERM's sample efficiency while staying tractable, yielding succinct, auditable programs that standard SGD fails to learn"
X Link @GalantiTomer 2025-10-17T03:02Z 1377 followers, XXX engagements