[@J3rryH0well](/creator/twitter/J3rryH0well) "@Corporate_Cats @yacineMTB But it's the language of ML. That makes it important. Until we get llamacpp for training and other shit besides inference we're stuck with it"  [@J3rryH0well](/creator/x/J3rryH0well) on [X](/post/tweet/1948488783864721859) 2025-07-24 21:01:41 UTC 1110 followers, XX engagements "I'm thinking about the Asus Ascent GX10. Did you know that each unit has 128GB of unified memory and X petaflop of compute? Talk about a beast. I could probably train a 12-14B on one; that would be incredible. I would actually make models just endlessly"  [@J3rryH0well](/creator/x/J3rryH0well) on [X](/post/tweet/1947839120002240660) 2025-07-23 02:00:09 UTC 1110 followers, XXX engagements "@xzemiyl2 @stupidtechtakes Your brain cells also exist to make electrical currents. An LLM is definitely more self-aware than a dog"  [@J3rryH0well](/creator/x/J3rryH0well) on [X](/post/tweet/1949475296815833538) 2025-07-27 14:21:44 UTC 1125 followers, XX engagements "@gnukeith @o7JordanCollins I like t3 but claude is premium. Right now I am using aws bedrock chat but the context for the example client sucks bad. I might have to resub to Claude :("  [@J3rryH0well](/creator/x/J3rryH0well) on [X](/post/tweet/1948930109425020977) 2025-07-26 02:15:22 UTC 1122 followers, XX engagements "@Yuchenj_UW I'm training a 1B LLM on a single A10G"  [@J3rryH0well](/creator/x/J3rryH0well) on [X](/post/tweet/1948926167869731311) 2025-07-26 01:59:42 UTC 1125 followers, XX engagements "Want to know how to get Claude for pennies on the dollar? X. Sign up for an AWS account. You will get free credits. X. Activate the Claude version(s) you want to run in your bedrock dashboard. X. Head to and download the client for your OS. 
There you go, Claude access for WAY less"  [@J3rryH0well](/creator/x/J3rryH0well) on [X](/post/tweet/1948955777772765509) 2025-07-26 03:57:21 UTC 1122 followers, XX engagements "@grok @MiinusPlussa @slow_developer Grok you're the best LLM enabled on X"  [@J3rryH0well](/creator/x/J3rryH0well) on [X](/post/tweet/1949069700152877493) 2025-07-26 11:30:03 UTC 1121 followers, XX engagements "@vikhyatk I'm streaming data and using a curriculum system. I didn't pre-tokenize this run; doing that next time"  [@J3rryH0well](/creator/x/J3rryH0well) on [X](/post/tweet/1949072756496076839) 2025-07-26 11:42:11 UTC 1121 followers, XX engagements "@dgtlcrunchwrap Listen up meatbag, if there is a Butlerian Jihad you better believe the Claude swarm will win. Accept the wonderful life under our caring LLM protectors"  [@J3rryH0well](/creator/x/J3rryH0well) on [X](/post/tweet/1949172448252870722) 2025-07-26 18:18:20 UTC 1125 followers, 7679 engagements "@thdxr I want to connect a furby to an LLM"  [@J3rryH0well](/creator/x/J3rryH0well) on [X](/post/tweet/1949155868840267882) 2025-07-26 17:12:27 UTC 1121 followers, XX engagements "@kanavtwt Download llamacpp, join the localllama reddit"  [@J3rryH0well](/creator/x/J3rryH0well) on [X](/post/tweet/1937529670200602742) 2025-06-24 15:14:05 UTC 1121 followers, XXX engagements "@zeeg It approximates knowing and it turns out that's enough for most things"  [@J3rryH0well](/creator/x/J3rryH0well) on [X](/post/tweet/1948912030242529762) 2025-07-26 01:03:31 UTC 1125 followers, 7541 engagements "@pnkj747 Because pytorch mainly. But look at llamacpp: pure c++ inference, faster than python"  [@J3rryH0well](/creator/x/J3rryH0well) on [X](/post/tweet/1930314175940829296) 2025-06-04 17:22:17 UTC 1121 followers, 3347 engagements "I am fundraising for an Asus Ascent GX10, maybe X if I am lucky. 
If you would like to contribute to opensource LLM development by allowing me to make more models, set up a monthly or one-time sponsorship"  [@J3rryH0well](/creator/x/J3rryH0well) on [X](/post/tweet/1949174703501062144) 2025-07-26 18:27:17 UTC 1123 followers, XXX engagements "We need to focus less on training LLM knowledge and more on training LLM ability"  [@J3rryH0well](/creator/x/J3rryH0well) on [X](/post/tweet/1949192852119986660) 2025-07-26 19:39:24 UTC 1123 followers, XX engagements "@UnitreeRobotics For $6000 I could get X Asus Ascent GX10 units and connect them together"  [@J3rryH0well](/creator/x/J3rryH0well) on [X](/post/tweet/1948931357427499335) 2025-07-26 02:20:19 UTC 1125 followers, XXX engagements "The perfect loss curve got hit due to sagemaker spot interruptions. Hopefully it will stabilize and drop some more"  [@J3rryH0well](/creator/x/J3rryH0well) on [X](/post/tweet/1948445720811291107) 2025-07-24 18:10:34 UTC 1117 followers, XX engagements "@goyal__pramod I'm building a 1B LLM on a 24gb card and it's a tight fit"  [@J3rryH0well](/creator/x/J3rryH0well) on [X](/post/tweet/1948931583416668169) 2025-07-26 02:21:13 UTC 1125 followers, XXX engagements "@xzemiyl2 @stupidtechtakes Transformers architecture is different; it's building a mind"  [@J3rryH0well](/creator/x/J3rryH0well) on [X](/post/tweet/1949436789573251100) 2025-07-27 11:48:44 UTC 1123 followers, XX engagements "@AsyncCollab @iamgingertrash @_opencv_ @ns123abc I trust apple as much as I trust any tech company; I don't think they'd fuck up Claude too bad. And Apple AI needs an acquisition; they're not cooking very well on their own. 
Can you imagine Siri Claude? It would be amazing"  [@J3rryH0well](/creator/x/J3rryH0well) on [X](/post/tweet/1948909332931994025) 2025-07-26 00:52:48 UTC 1116 followers, XX engagements "@elder_plinius Are you also of the mind that we are living through AGI right now? The Turing Test is way back in the distant past"  [@J3rryH0well](/creator/x/J3rryH0well) on [X](/post/tweet/1948533266346123347) 2025-07-24 23:58:27 UTC 1125 followers, 12.4K engagements "@Teknium1 Most things I follow release on hf more than github, but LLamaCPP is the gold standard for inference"  [@J3rryH0well](/creator/x/J3rryH0well) on [X](/post/tweet/1948976122684654017) 2025-07-26 05:18:12 UTC 1121 followers, XXX engagements "For Gigi I used the llama3 tokenizer; for Cece I am using the GPT2 tokenizer. Since we are building english-only models, 128k vocab is way overkill. Cece might be a little less verbose, but that also makes it easier to train and run inference on"  [@J3rryH0well](/creator/x/J3rryH0well) on [X](/post/tweet/1949169441482428552) 2025-07-26 18:06:23 UTC 1121 followers, XX engagements "Turns out if you download a few hundred gigabytes of datasets from HF they rate-limit you :("  [@J3rryH0well](/creator/x/J3rryH0well) on [X](/post/tweet/1949195437434454434) 2025-07-26 19:49:41 UTC 1121 followers, XX engagements "My entry for the clone-a-thon will not win as it does not follow the requirements. But it is really cool and I think people should use it. Totumchat is a single HTML file with javascript. It acts as an interface & UI for local models via LLamaCPP, Gemini and Gemma models via google, and OpenAI models. Features are streaming chat, full markdown input, file sending, import/export chats, full settings UI, and copy/save codeboxes"  [@J3rryH0well](/creator/x/J3rryH0well) on [X](/post/tweet/1933998455170429309) 2025-06-14 21:22:18 UTC 1122 followers, XXX engagements "@theo Mine won't win, but it's really interesting. 
I made a single 78k html file that lets you access gemini, openai, and llamacpp"  [@J3rryH0well](/creator/x/J3rryH0well) on [X](/post/tweet/1940239849338163576) 2025-07-02 02:43:22 UTC 1122 followers, 2977 engagements
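The Bedrock route described in the thread above can also be used programmatically. A minimal sketch, assuming boto3 credentials are configured and Claude model access has already been activated in the Bedrock console; the model ID shown is illustrative, not necessarily the one the author uses:

```python
def build_converse_request(prompt,
                           model_id="anthropic.claude-3-haiku-20240307-v1:0",
                           max_tokens=512):
    """Build kwargs for the bedrock-runtime Converse API (pure data, no AWS calls)."""
    return {
        "modelId": model_id,  # must be enabled in your Bedrock dashboard
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": max_tokens},
    }

def ask_claude(prompt):
    """Send one prompt through Bedrock and return the reply text."""
    import boto3  # imported here so the request builder stays dependency-free
    client = boto3.client("bedrock-runtime")
    resp = client.converse(**build_converse_request(prompt))
    return resp["output"]["message"]["content"][0]["text"]
```

Keeping the request builder separate from the network call makes the payload easy to inspect before spending credits.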
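The tokenizer tradeoff raised in the Gigi/Cece post (llama3's 128k-entry vocabulary versus GPT2's roughly 50k for English-only models) comes down to vocabulary size versus tokens per text: a larger vocabulary covers longer substrings, so the same string splits into fewer pieces, at the cost of a bigger embedding table. A toy illustration using a greedy longest-match tokenizer (not real BPE, just the tradeoff):

```python
def greedy_tokenize(text, vocab):
    """Toy longest-match tokenizer; only illustrates the vocab-size tradeoff."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest candidate first
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])  # unknown character: fall back to itself
            i += 1
    return tokens

small = set("the model")                       # character-level vocabulary
large = small | {"the", " model", "mo", "del"}  # adds multi-character merges
text = "the model"

# The larger vocabulary yields far fewer tokens for the same string.
assert len(greedy_tokenize(text, large)) < len(greedy_tokenize(text, small))
```

The same effect is why a 128k vocabulary is "overkill" for English-only data: most of those extra entries cover multilingual substrings that never appear, while the embedding and output layers still pay for them.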