#runpod


Setup ComfyUI from Scratch & Run Flux Diffusion Model on RunPod – Step by Step!

Want to set up ComfyUI for seamless AI image generation? This tutorial covers everything from ComfyUI installation and RunPod setup to Flux Schnell model integration and Hugging Face model downloads. Follow the step-by-step guide with ready-to-use commands and get started with AI-powered creativity today! 🎨✨

👉 Read the full guide here: mobisoftinfotech.com/resources
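For a rough idea of what such a setup involves, the commands on a fresh RunPod pod look roughly like this; the Hugging Face repo, file name, and target folder here are assumptions based on the public FLUX.1-schnell release, so check them against the full guide:

# Clone ComfyUI and install its Python dependencies
git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI
pip install -r requirements.txt

# Fetch the Flux Schnell weights (repo, file name, and folder assumed; verify against the guide)
wget -P models/unet https://huggingface.co/black-forest-labs/FLUX.1-schnell/resolve/main/flux1-schnell.safetensors

# Start ComfyUI, listening on all interfaces so RunPod's proxy can reach it
python main.py --listen 0.0.0.0 --port 8188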

Last week I used #Replicate to crunch through the #GLAMhack projects, as my physical hardware (what do you mean you didn't bring GPUs to a hackathon... 😉) was not cooperating. But for more code-level data science work, #Runpod is proving excellent here at the #EnergyDataHackdays ⚡🔓 Here is my referral link: runpod.io/?ref=yqg82xxd

runpod.io | RunPod – The Cloud Built for AI: Develop, train, and scale AI models in one cloud. Spin up on-demand GPUs with GPU Cloud, scale ML inference with Serverless.

The V3 update adds video-to-video functionality. For those interested in using LivePortrait but lacking a powerful GPU, using a Mac, or preferring cloud-based solutions, this tutorial is ideal. It guides you through one-click installation and usage of LivePortrait on , , and free accounts. After this tutorial, you'll find cloud-based LivePortrait as easy to use as a local installation. youtu.be/wG7oPp01COg


RunPod, a startup specializing in artificial intelligence infrastructure, has raised a significant seed round of 20 million US dollars from Dell Technologies Capital and Intel Capital.

thetransmitted.com/ai/dell-ta-

Continued thread

At the moment I am translating lyrics. Running it locally to translate non-English lyrics, I get a response time of around 60 seconds per lyric; on the pod it is around 5 seconds.
My hardware has 32 GB of memory and no GPU.
The only weirdness I noticed is that it translated 'zonde' (which in this context means 'a waste') as 'sinful'.
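For anyone curious what a single lyric call looks like, it is roughly this against Ollama's /api/generate endpoint; the model name, prompt, and pod hostname below are placeholders, and pointing the call at the pod instead of localhost is what brings the response time down:

# One lyric through Ollama's HTTP API (model name and prompt are placeholders)
curl -s http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Translate these lyrics into English: ...",
  "stream": false
}'

# Same call pointed at the pod (illustrative hostname) once port 11434 is exposed
curl -s http://<pod-host>:11434/api/generate -d '{"model": "llama3", "prompt": "...", "stream": false}'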

2nd day I am running it on a pod.
Not that hard to set up (once you know); the full command sequence is sketched after the steps below.
1) Create a pod of your liking (it should be a GPU pod). I used the latest RunPod PyTorch 2.2.10 template.
2) Add port 11434 to the exposed ports.
3) Add the environment variable OLLAMA_HOST=0.0.0.0.
4) Start it up and SSH into it (assuming you have your SSH keys added as needed).
5) Run the install script from ollama.ai.
6) ollama serve &
7) ollama pull [the models you want]
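Put together, the sequence inside the pod looks roughly like this; the install URL is the one referenced above (check their site for the current one) and the model name is just an example:

# Install Ollama via the script from ollama.ai (verify the current URL on their site)
curl -fsSL https://ollama.ai/install.sh | sh

# Listen on all interfaces so the exposed port 11434 is reachable from outside the pod
export OLLAMA_HOST=0.0.0.0
ollama serve &

# Pull the models you want (llama3 here is just an example)
ollama pull llama3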

ollama.ai | Ollama – Get up and running with large language models.