@nexa_ai
"Day-0 support is how we roll 🫡 Thrilled to bring @IBM Granite XXX to life on @Snapdragon platforms (NPU/GPU/CPU) with @Qualcomm. On-device AI will be everywhere"
X Link @nexa_ai 2025-10-03T21:06Z 2181 followers, XXX engagements
"Screenshots and photos arent just memories theyre the notes we try to keep: slides from a talk posters from an event a page from a book a receipt or a chat we want to revisit. The problem: they pile up. The insight gets lost. Hyperlink fixes this: - Ask your photos in natural language - Get AI summaries with the source image cited - XXX% on-device & private (no uploads) - No quotas unlimited context Not just search. Answers you can trust. Finally nothing you save is lost anymore"
X Link @nexa_ai 2025-09-28T19:47Z 2183 followers, XXX engagements
"@Aero96193997 @Alibaba_Qwen @Apple @Qualcomm @nvidia @intel @MediaTek @AMD Qwen3-VL-4B takes about 6GB VRAM"
X Link @nexa_ai 2025-10-15T07:47Z 2279 followers, XX engagements
"We ran @OpenAI GPTOSS 20B fully local on a phone. Heres how:"
X Link @nexa_ai 2025-10-11T17:19Z 2279 followers, 1735 engagements
"Recently one of our teammates had a 6-hour flight and a report due that night. No Wi-Fi. Hundreds of notes and files to sort through. Hyperlink running @OpenAI GPT-OSS locally pulled insights from all of them instantly searching summarizing connecting dots and helping them write. It felt like having ChatGPT built into their computer but fully local & offline. Also a sneak peek of the new Hyperlink UI weve been testing 👀"
X Link @nexa_ai 2025-10-12T19:20Z 2279 followers, 9447 engagements
"Day-0 support for @IBM Granite XXX is live. With NexaSDK you can run Granite-4.0-Micro (3B) on @Qualcomm NPUs with a single line of code and seamlessly switch between GPU & CPU in the same SDK. Now available on Qualcomms newest platforms: @Snapdragon X2 Elite PCs and Snapdragon X Elite Gen X smartphones (Demo below) ⚡ Purpose-built for high-volume low-latency tasks Granite-4.0-Micro powers state-of-the-art on-device agents RAG pipelines and real-time assistants. Were proud to partner with @IBM and @Qualcomm to make Granite XXX deployment-ready from day zero across PCs smartphones automotive"
X Link @nexa_ai 2025-10-02T15:10Z 2220 followers, 87.9K engagements
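A minimal sketch of the one-line flow described above. The NexaAI repo identifiers below are assumptions (the post does not spell out the Granite command; only the SDXL and Qwen3-VL identifiers appear verbatim elsewhere in this feed):
  # Hypothetical model identifiers; substitute the actual Granite-4.0-Micro builds from the NexaSDK catalog
  nexa infer NexaAI/granite-4.0-micro-npu   # Qualcomm Hexagon NPU build (NexaML backend)
  nexa infer NexaAI/granite-4.0-micro       # same model on GPU/CPU (GGML backend), no code changes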
"@_JoshuaTimothy @Alibaba_Qwen @Apple @Qualcomm @nvidia @intel @MediaTek @AMD Update: We have figured out a way to fix this. This happens to a selective portion of images. Will be rolling out the fix soon. Join our discord to receive latest update:"
X Link @nexa_ai 2025-10-15T21:16Z 2279 followers, XX engagements
"@Alibaba_Qwen You can already run Qwen3-VL-4B & 8B locally Day-0 on NPU/GPU/CPU using MLX GGUF and NexaML. Check out NexaSDK"
X Link @nexa_ai 2025-10-14T17:34Z 2279 followers, 2044 engagements
"Nexa AI at IBM #TechXchange 2025 Join Alan Zhu @alanzhuly from Nexa AI at #IBMTechXchange 2025 to learn how on-device AI is becoming the next leap in intelligence and how Nexa AI is leading this movement toward XXX% private local and offline AI. 📅 Wednesday Oct X Granite Theater Orlando FL - Speaker Session: 3:30 PM X :30 PM ET - Demo Session: 11:30 AM 12:00 PM ET"
X Link @nexa_ai 2025-10-06T23:37Z 2182 followers, XXX engagements
"@_JoshuaTimothy @Alibaba_Qwen @Apple @Qualcomm @nvidia @intel @MediaTek @AMD We are looking into this now Thanks for reporting"
X Link @nexa_ai 2025-10-15T07:46Z 2279 followers, XX engagements
"🚀 NexaML Supports AMD NPU for SDXL Image Generation We're excited to announce AMD NPU support for SDXL image generation bringing high-quality fully on-device image synthesis to AMD devices. Getting started with AMD NPU acceleration: nexa infer NexaAI/sdxl-turbo-amd-npu What this unlocks for developers: - Creative applications with instant image generation - Content creation tools without cloud dependency - Privacy-first image synthesis for enterprise use Try SDXL on AMD NPU today and experience the future of on-device AI image generation"
X Link @nexa_ai 2025-10-01T19:01Z 2271 followers, XXX engagements
"🚀 Nexa SDK is live on Product Hunt today Were on a mission to make AI deployment radically simple run any model on any device with one SDK. 👉 Help us spread the word & support the launch: Why it matters Deploying multimodal AI shouldnt be complex. With Nexa SDK you can: - Run your favorite models instantly (LLM VLM TTS ASR image generation .) - On any hardware: NPU GPU or CPU - With one command and OpenAI-compatible APIs Our Core Values - Universal Hardware Support: Qualcomm Hexagon Intel AI Boost Apple Neural Engine NVIDIA AMD Intel iGPUs Apple Silicon and more - Complete AI Stack: - Text:"
X Link @nexa_ai 2025-09-29T14:50Z 2273 followers, 69.2K engagements
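The post above mentions one-command runs and OpenAI-compatible APIs. A rough sketch of how that could be exercised, assuming nexa serve accepts a model identifier and exposes a local OpenAI-style endpoint; the port, path, and model name below are illustrative assumptions:
  # Start a local OpenAI-compatible server (invocation shape and port are assumptions)
  nexa serve NexaAI/Qwen3-VL-4B-Instruct-NPU
  # Query it with a standard OpenAI chat-completions request, streaming enabled
  curl http://localhost:8080/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "NexaAI/Qwen3-VL-4B-Instruct-NPU", "messages": [{"role": "user", "content": "Summarize my notes from this week"}], "stream": true}'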
"@ClementDelangue @OpenAI Lets do it. Messaged you"
X Link @nexa_ai 2025-10-11T22:39Z 2279 followers, XX engagements
"Run Granite X today on NPU GPU and CPU with NexaSDK"
X Link @nexa_ai 2025-10-02T15:10Z 2181 followers, XXX engagements
"Pyannote's brand-new model speaker-diarization-community-1 now runs on Qualcomm NPU with NexaSDK (Day-0 Support). It identifies who speaks when the core building block for transcription (meetings healthcare calls intelligence) and media processing like dubbing. We are excited to partner with @pyannoteAI and @Qualcomm to bring the worlds most popular speaker diarization model to NPU. These state-of-the-art capabilities are now accessible to developers across billions of devices phones PCs IoT XR and automotive. Faster. More energy-efficient. XXX% on-device. Demo below"
X Link @nexa_ai 2025-09-30T15:52Z 2181 followers, XXX engagements
"Matthew McConaugheys dream private LLM Already exists. Its called Hyperlink - an offline private AI agent that knows every file you own. Search files and discover buried insights with it. AI runs XXX% local. Hey Matthew want to try it Anyone can set up in minutes"
X Link @nexa_ai 2025-09-23T15:36Z 2247 followers, 139.8K engagements
"Try Hyperlink for free on your flights:"
X Link @nexa_ai 2025-10-12T19:20Z 2271 followers, XXX engagements
"Nexa SDK is live on Product Hunt Featured at Snapdragon Summit 2025 we heard a game-changing message: "AI is the new UI". Just as modern development frameworks simplified web application creation Nexa SDK makes local AI app development effortless through day-zero model support and developer-first design. Through our strategic partnerships with Qualcomm we're pioneering the future of AI development with day-zero NPU support on next-generation PC and mobile platforms. Developers and OEMs no longer need to juggle fragmented tools or complex setupsapps are NPU-optimized out of the box and"
X Link @nexa_ai 2025-09-29T14:59Z 2183 followers, XXX engagements
"Step-by-step instruction"
X Link @nexa_ai 2025-10-02T15:10Z 2181 followers, XXX engagements
"Thrilled to speak and demo at @IBM #TechXchange in Orlando this week @alanzhuly shared how were advancing the frontier of on-device AI showcasing: ⚡ IBM Granite XXX running lightning-fast on @Qualcomm NPU the first Day-0 model support in NPU history. 💻 Hyperlink the worlds first local AI app that runs the latest models on NPU/GPU/CPU turning your computer into an agentic assistant that can search and reason across all your files privately and offline. ⚙ NexaML and NexaSDK our CUDA-like software layer and SDK for NPUs built from scratch to run any model on any backend with multimodal support"
X Link @nexa_ai 2025-10-10T16:21Z 2275 followers, XXX engagements
"🚀Day X Support Qwen3-VL-30B-A3B-Instruct on NexaSDK Were excited to announce Day X support for Qwen3-VL-30B-A3B-Instruct a breakthrough in multimodal intelligence now running natively on NexaSDK. Weve added full support for the MLX Engine on @Apple Silicon GPUs enabling developers to run Qwen3-VL locally with a single command: nexa infer NexaAI/qwen3vl-30B-A3B-mlx Note: Running this model requires at least 64GB of RAM on Mac. At Nexa were delivering Day X support for state-of-the-art models. @Alibaba_Qwen'Qwen3-VL-30B-A3B-Instruct is just the beginning. Thanks @JustinLin610 for the"
X Link @nexa_ai 2025-10-04T19:49Z 2181 followers, 4858 engagements
"@Alibaba_Qwen Appreciate the partnership 🤝🤝🤝"
X Link @nexa_ai 2025-10-14T17:43Z 2275 followers, XXX engagements
"Check out NexaSDK GitHub to get started:"
X Link @nexa_ai 2025-10-14T17:29Z 2279 followers, XXX engagements
"🚀 Nexa SDK is live on Product Hunt At NEXA AI we believe AI should be fast private and available anywhere not locked to the cloud. Today that vision is real with Nexa SDK now running seamlessly on Intel CPUs iGPUs & NPUs. What you can do with Nexa SDK: Run models like Gemma LLaMA Qwen Whisper Stable Diffusion locally Get acceleration across Intel integrated GPUs NPUs and CPUs Build multimodal apps (text vision audio) in minutes Use OpenAI-compatible API with function calling & streaming Multimodal Server: As shown in the demo video you can run image + text tasks fully locally with nexa serve"
X Link @nexa_ai 2025-09-29T15:15Z 2181 followers, XXX engagements
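A hedged sketch of the image + text request the post above describes for nexa serve, using the standard OpenAI-compatible multimodal message shape it refers to; the endpoint URL, the model identifier, and whether a local file path or a base64 data URL is expected are all assumptions:
  # Multimodal (image + text) request against a local nexa serve instance
  curl http://localhost:8080/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
      "model": "NexaAI/Qwen3-VL-4B-Instruct-NPU",
      "messages": [{
        "role": "user",
        "content": [
          {"type": "text", "text": "What does this slide say?"},
          {"type": "image_url", "image_url": {"url": "file:///path/to/slide.png"}}
        ]
      }]
    }'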
"The best vision-language models just went fully on-device - Day-0 on NPU GPU and CPU. Qwen3-VL-4B and 8B from @Alibaba_Qwen now run locally across @Apple @Qualcomm @NVIDIA @Intel @MediaTek and @AMD devices with NexaSDK Every line of model inference code in NexaML GGML and MLX was built from scratch by Nexa for SOTA performance on each hardware stack powered by Nexas unified inference engine. One line to run the latest VLM Day-0 on every backend Qualcomm NPU (NexaML): nexa infer NexaAI/Qwen3-VL-4B-Instruct-NPU nexa infer NexaAI/Qwen3-VL-4B-Thinking-NPU CPU/GPU for everyone (GGML): nexa infer"
X Link @nexa_ai 2025-10-14T17:29Z 2279 followers, 238.6K engagements
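The post above is cut off after the GGML line. A sketch of the full set of one-liners: the two NPU identifiers are quoted from the post, while the GGUF and MLX identifiers are assumptions patterned on them:
  # Qualcomm NPU (NexaML), as given in the post
  nexa infer NexaAI/Qwen3-VL-4B-Instruct-NPU
  nexa infer NexaAI/Qwen3-VL-4B-Thinking-NPU
  # CPU/GPU via GGML and Apple Silicon via MLX -- identifiers below are assumptions
  nexa infer NexaAI/Qwen3-VL-4B-Instruct-GGUF
  nexa infer NexaAI/Qwen3-VL-4B-Instruct-MLX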
"Try gpt-oss on mobile (need =16GB RAM) on Nexa Studio:"
X Link @nexa_ai 2025-10-06T16:16Z 2275 followers, 5747 engagements
"Our pro customers get access to these advanced optimization tools and techniques to achieve maximum NPU performance for their specific use cases. Book a call today to unlock AI performance on your NPUs:"
X Link @nexa_ai 2025-10-03T17:45Z 2279 followers, XXX engagements
"@simonw Thanks @simonw. We just shared on HackerNews - a full breakdown of how we got GPT-OSS 20B running locally on a phone. Would love your thoughts on it"
X Link @nexa_ai 2025-10-11T18:52Z 2181 followers, XXX engagements
"Get started with the example project and try on your Snapdragon NPU"
X Link @nexa_ai 2025-10-13T16:04Z 2279 followers, XXX engagements
"Sam Altman recently said:GPT-OSS has strong real-world performance comparable to o4-miniand you can run it locally on your phone. Many believed running a 20B-parameter model on mobile devices was still years away. AtNexa AI weve built our foundation on deep on-device AI technologyturning that vision into reality. TodayGPT-OSSis runningfully localon mobile devices through our appNexa Studio. Real performance on @Snapdragon Gen 5: - XX tokens/sec decoding speed - X seconds Time-to-First-Token Developers can now useNexaSDKto build their own local AI apps powered by GPT-OSS. What this unlocks: -"
X Link @nexa_ai 2025-10-06T16:10Z 2279 followers, 190.4K engagements
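For developers, the post above points at NexaSDK for building on GPT-OSS locally. A minimal sketch; the model identifier is an illustrative assumption, and per the Nexa Studio post elsewhere in this feed a phone needs ≥16GB of RAM:
  # Run GPT-OSS 20B locally with NexaSDK (identifier is an assumption, not confirmed by the post)
  nexa infer NexaAI/gpt-oss-20b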
"@Alibaba_Qwen Qwen3-VL-30B-A3B is awesome. Can't wait to run this one on device"
X Link @nexa_ai 2025-10-04T02:56Z 2279 followers, 1227 engagements
"Under the Hood: How NexaML Delivers Industry-Leading NPU inference Ever wondered what makes NexaML's Qualcomm NPU optimization effective It starts with precision profiling and ends with graph-level optimization that unlocks true NPU potential. Our Two-Stage Performance Advantage: X Ops-Level Profiling Intelligence Our internal NexaML toolkit includes advanced profiling tools that measure latency for every single operation. This granular visibility guides our graph inference optimization - we know exactly where every millisecond goes and how to eliminate bottlenecks. X Graph Inference"
X Link @nexa_ai 2025-10-03T17:45Z 2275 followers, XXX engagements
"We just built the worlds first fully NPU-supported local RAG pipeline retrieval rerank and generation all run entirely on the @Qualcomm NPU with SOTA models. XX less power. Always-on. XXX% private. Cooler hands. Models: - Embedding: @GoogleDeepMind EmbeddingGemma-300M - Rerank: @JinaAI_ Reranker v2 - Generate: @IBMResearch Granite 4.0-Micro Together full-stack SOTA retrieval + generation on NPU. Demo & example project for @Snapdragon AI PC below 👇"
X Link @nexa_ai 2025-10-13T16:04Z 2279 followers, 16.3K engagements
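A rough sketch of driving the three stages named above from NexaSDK. All three NexaAI repo identifiers are assumptions (the post names the models but not their repo IDs), and a real pipeline would wire retrieval, rerank, and generation together in application code rather than as separate CLI calls:
  # 1. Embed: index documents and embed the query on the NPU (identifier is an assumption)
  nexa infer NexaAI/embeddinggemma-300m-npu
  # 2. Rerank: reorder retrieved chunks by relevance to the query (identifier is an assumption)
  nexa infer NexaAI/jina-reranker-v2-npu
  # 3. Generate: answer from the top-ranked context (identifier is an assumption)
  nexa infer NexaAI/granite-4.0-micro-npu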
"Learn more in Blog:"
X Link @nexa_ai 2025-10-14T17:29Z 2279 followers, XXX engagements