
@ivnardini "🚀 Deploying open models with Terraform on Vertex AI The UI/SDK isn't scalable for managing all your open-model deployments. Vertex AI Model Garden just launched the google_vertex_ai_endpoint_with_model_garden_deployment Terraform resource. Now you can manage your open-model deployments (Hugging Face or Model Garden) with dedicated resources and configuration in one unique resource. Docs and code in the 🧵"
X Link @ivnardini 2025-10-08T15:30Z 1170 followers, XXX engagements

"🚀 Deploying agents on Vertex AI Agent Engine with Terraform Vertex AI released the google_vertex_ai_reasoning_engine Terraform resource to deploy agents built with custom classes or agentic frameworks like ADK directly. TLDR: 📦 Package: Just cloudpickle your agent, create a requirements.txt, tar your dependencies, and you're set. 🔒 Secure: Built with support for VPC-SC, least-privilege IAM, and private networking. 🚀 Serverless: After you run terraform apply, Agent Engine handles everything else: scaling, patching, and availability. Check out the notebook and blog post for the full code"
X Link @ivnardini 2025-10-15T15:00Z 1170 followers, XXX engagements
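The "package" step above can be sketched locally. This is a minimal illustration, not Agent Engine's actual deployment code: the EchoAgent class and all file names are hypothetical, and stdlib pickle stands in for cloudpickle (which Agent Engine uses because it can serialize closures and locally defined classes) to keep the sketch dependency-free.

```python
# Hypothetical sketch of the three packaging steps: pickle the agent, write
# requirements.txt, tar local dependencies. EchoAgent and the file names are
# illustrative only.
import pickle
import tarfile
from pathlib import Path


class EchoAgent:
    """Stand-in for a custom agent class."""

    def query(self, text: str) -> str:
        return f"echo: {text}"


def package_agent(workdir: Path) -> list[str]:
    """Produce the artifacts an Agent Engine deployment expects."""
    workdir.mkdir(parents=True, exist_ok=True)
    # 1. Serialize the agent object (Agent Engine uses cloudpickle here).
    (workdir / "agent.pkl").write_bytes(pickle.dumps(EchoAgent()))
    # 2. Declare Python dependencies.
    (workdir / "requirements.txt").write_text(
        "cloudpickle\ngoogle-cloud-aiplatform\n"
    )
    # 3. Tar any local modules the agent imports at runtime.
    helper = workdir / "my_helpers.py"
    helper.write_text("GREETING = 'hi'\n")
    with tarfile.open(workdir / "dependencies.tar.gz", "w:gz") as tar:
        tar.add(helper, arcname=helper.name)
    return sorted(p.name for p in workdir.iterdir())
```

Once these artifacts are uploaded to Cloud Storage, the Terraform resource can reference them; see the linked notebook for the authoritative flow.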

"Why choose? Use Vertex AI Agent Engine with BOTH Cloud Run & GKE One of the top questions I get: Can I use Agent Engine services (like Memory Bank) with my agent running on GKE or Cloud Run? The answer is: YES. You don't have to choose between fully managed memory/sessions and your preferred runtime. Use Agent Engine for the managed agent-ops services and deploy your agent on GKE/Cloud Run. Check out two new hands-on tutorials in the 🧵 from @vladkol showing exactly how to build AI agents with the Agent Development Kit (ADK) + Vertex AI Agent Engine for managed Sessions & Memory"
X Link @ivnardini 2025-10-16T15:00Z 1170 followers, 1524 engagements
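The split described above, agent on Cloud Run/GKE with Sessions and Memory Bank on Agent Engine, means the runtime simply addresses Agent Engine resources over its API. A minimal sketch of composing those resource names, assuming the standard Vertex AI layout where Sessions and Memory Bank hang off a reasoningEngines resource (verify the exact paths in the REST reference):

```python
def agent_engine_paths(project: str, location: str, engine_id: str) -> dict[str, str]:
    """Compose the Agent Engine resource names an external runtime
    (Cloud Run or GKE) would call for managed Sessions and Memory Bank.
    The layout follows the usual Vertex AI naming convention and is an
    assumption to check against the official REST reference."""
    engine = f"projects/{project}/locations/{location}/reasoningEngines/{engine_id}"
    return {
        "engine": engine,
        "sessions": f"{engine}/sessions",
        "memories": f"{engine}/memories",
    }
```

The tutorials linked in the thread show the full pattern with the ADK client libraries rather than raw resource names.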

"This morning I spent some time in the Vertex AI documentation and was impressed by the open-source models available as APIs. Model as a Service (MaaS) gives you access to very large open models via a fully managed, serverless Chat Completions API. The key takeaway: there's no need to provision or manage your own infrastructure. You just call the model. The list of curated models is stacked: Llama (4, XXX, XXX, 3.1), Qwen3-Next, gpt-oss, DeepSeek, and even embedding models like multilingual-e5. Check out the new documentation in the 🧵 to get started"
X Link @ivnardini 2025-10-16T17:30Z 1170 followers, XXX engagements
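Since MaaS is exposed through an OpenAI-compatible Chat Completions surface, "just call the model" reduces to pointing a client at the right regional endpoint. The URL shape below is an assumption drawn from Vertex AI's OpenAI-compatibility docs (the version segment in particular may differ), so confirm it there before use:

```python
def maas_chat_base_url(project: str, region: str) -> str:
    """Base URL for Vertex AI's OpenAI-compatible Chat Completions surface
    (serverless MaaS). The path, especially the version segment, is an
    assumption; verify it against the current documentation."""
    return (
        f"https://{region}-aiplatform.googleapis.com/v1beta1/"
        f"projects/{project}/locations/{region}/endpoints/openapi"
    )
```

With the OpenAI SDK you would pass this as `base_url` and a short-lived Google access token as the API key; a model ID such as `meta/llama-3.1-405b-instruct-maas` is illustrative, not confirmed from the post.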

"Great read on how to integrate your preferred runtime with Vertex AI Agent Engine platform capabilities"
X Link @ivnardini 2025-10-14T20:18Z 1168 followers, 1452 engagements

"🚀 vLLM on TPU just got a massive upgrade Google and vLLM have announced a new unified backend for vLLM TPU using tpu-inference. It uses a single JAX→XLA lowering path to run both PyTorch and JAX models performantly on TPUs. TLDR: XX% more throughput for PyTorch models (no code changes). New RPA v3 kernel: more flexible & XX% faster. SPMD by default: native XLA optimizations. Up to 5x perf gains on Llama3-8B. Check out the full blog and try it on Vertex AI with the new vLLM TPU container"
X Link @ivnardini 2025-10-16T21:00Z 1170 followers, XXX engagements

"Blog: Docs: Notebook:"
X Link @ivnardini 2025-10-16T21:00Z 1170 followers, XX engagements