#infiniband

Gabriele Svelto:
Nvidia has been doing a lot of useless stuff lately, but this is actually a big deal. I wonder what the latency looks like on these switches. Traditionally direct-attach copper has always been the preferred choice for low-latency applications, with optics used for longer connections where latency matters less. I'm curious if this is going to change that.
https://www.techpowerup.com/334337/nvidia-commercializes-silicon-photonics-with-infiniband-and-ethernet-switches
#Infiniband #Ethernet

Peter J. Welcher:
New blog "How AI Ate My Blog on RoCEv2". #PeterWelcher #CCIE1773 #AI #ECN #PFC #RoCEV2 #Infiniband
https://www.linkedin.com/pulse/ai-ate-my-blog-rocev2-peter-welcher-uu9ue/

Asterfusion:
🚀 RoCE vs. InfiniBand: The Game-Changing Data Center Switch Test Results Revealed! ⚡

In AI and HPC networks, RoCE (RDMA over Converged Ethernet) and InfiniBand (IB) are often the go-to choices. Both offer low-latency, lossless transmission, but they come with key differences.

🔍 InfiniBand: a mature, low-latency protocol with specialized hardware, but higher TCO due to single-vendor reliance.
🔍 RoCEv2: more cost-effective, interoperable, and ideal for large-scale deployments like xAI's AI cluster in Memphis!

Which one fits your needs? See the full comparison! 🔥
https://cloudswit.ch/blogs/how-will-deepseek-shake-up-white-box-networking/
#RoCE #InfiniBand #AI #HPC #DataCenter #Networking #TechComparison #Ethernet #NetworkOptimization

Habr:
The incredible power of the NVIDIA GB200 NVL72: inside an AI-compute giant

Hi, Habr! If you have always wondered how truly high-performance systems are put together, you are in the right place. In today's article we describe how Nvidia combined 72 B200 accelerators into a single CUDA processor, the GB200 NVL72, and look at how NVLink, Ethernet, and InfiniBand are used to build an efficient interconnect. The hardware deep dive awaits behind the "Read more" button.
https://habr.com/ru/companies/serverflow/articles/864314/
#сервер_флоу #GB200_NVL72 #nvlink #blackwell #nvidia_grace #Nvidia_Superchip #NVLink_Spine #infiniband #llm #SeverFlow

David Grayless:
VMEbus Switched Serial, commonly known as VXS, is an #ANSI standard (ANSI/VITA 41) that improves the performance of standard parallel #VMEbus by enhancing it to support newer switched serial fabrics. The base specification (ANSI 41) defines all common elements of the standard, while "dot" specifications (ANSI 41.n) define extensions which use specific serial fabrics (such as #PCIExpress, #RapidIO, StarFabric from #DolphinInterconnectSolutions, and #InfiniBand) or additional functionality.

Mike P:
Apparently if you push a wookie, you can expect to get a cookie in response.

I'm not sure I'll be trying this one myself.
#rdma #infiniband

Benjamin Carr, Ph.D. 👨🏻‍💻🧬:
What If #OmniPath Morphs Into The Best #UltraEthernet?
Many #HPC centers in the #US – importantly #Sandia and #LawrenceLivermore as well as the Texas Advanced Computing Center (#TACC) – wanted an alternative to #InfiniBand or proprietary interconnects like #HPE/#Cray's Slingshot, and they have been funding the redevelopment of Omni-Path. And now, #CornelisNetworks is going to be intersecting its Omni-Path switch and adapter roadmap with the #UEC roadmap.
https://www.nextplatform.com/2024/06/26/what-if-omni-path-morphs-into-the-best-ultra-ethernet/

aijobs.net:
HIRING: Principal GPU Capacity and Resource Management Engineer / US, CA, Santa Clara
💰 USD 272K+
👉 https://ai-jobs.net/J326446/
#Ansible #ComputerScience #CUDA #DeepLearning #Engineering #GPU #HPC #InfiniBand #Kubernetes #Linux

Habr:
InfiniBand on Windows is easy

What prompted me to write this short guide was another Habr article, "A fast network for the home lab, or how I got involved with InfiniBand." I was intrigued, but to my surprise I could find almost no information about running InfiniBand on Windows in a home setting, such as a home lab or a small office. There was information, of course: descriptions of how people used InfiniBand, what hardware they used, and what network performance they achieved, exactly like the author of the article mentioned above. But there was nothing on how to bring up a home IB network, how to configure it, or where to start at all. After spending some time online, I concluded that most users, even those familiar with networks and their configuration, are simply afraid of the word InfiniBand. To them it is something complex, used by megacorporations to build super-networks for supercomputers, and the phrase "InfiniBand at home" terrifies them. And if the switch is unmanaged on top of that... well, you get the idea. From the little information I did find, I distilled a simple beginner's guide to InfiniBand at home: what hardware you need, how to install the drivers, and how to set up an IB network between several PCs and an unmanaged switch. So let's get started!
https://habr.com/ru/articles/797759/
#infiniband #40g #mellanox #коммутатор #сетевая_карта #высокая_производительность

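As a rough illustration of the first sanity check such a setup needs (an addition here, not part of the Habr guide, which targets Windows): an unmanaged switch has no built-in subnet manager, so one host on the fabric has to run one (for example opensm), and a port only reports ACTIVE once a subnet manager has configured it. A minimal sketch using the Linux libibverbs API, assuming the standard infiniband/verbs.h headers are installed:

/* Query port 1 of the first RDMA device and print its state and LID.
 * Build with: cc check_port.c -libverbs   (hypothetical file name) */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void) {
    int n = 0;
    struct ibv_device **devs = ibv_get_device_list(&n);
    if (!devs || n == 0) { fprintf(stderr, "no IB devices found\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_port_attr port;
    if (ctx && ibv_query_port(ctx, 1, &port) == 0) {
        /* state is ACTIVE only after a subnet manager has set up the fabric */
        printf("%s port 1: state=%s lid=%u\n",
               ibv_get_device_name(devs[0]),
               ibv_port_state_str(port.state), port.lid);
    }
    if (ctx) ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
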
Kevin Karhan :verified::
@melissabeartrix then consider #iSCSI or #FCoE (#FibreChannel over #Ethernet) over #OM5-fiber-based #100GBASE-SR1.2 as per 802.3bm-2015 Ethernet.

Just make sure your devices have #QSFP28 ports to plug in the LC-Duplex connectors of the fibers and support 64k jumbo frames...

Cuz unlike #InfiniBand, that stuff is at least long-term useful and salvageable...
https://en.wikipedia.org/wiki/100_Gigabit_Ethernet
https://en.wikipedia.org/wiki/ISCSI
https://en.wikipedia.org/wiki/Fibre_Channel
https://en.wikipedia.org/wiki/Fibre_Channel_over_Ethernet
https://en.wikipedia.org/wiki/Fibre_Channel_over_IP
https://en.wikipedia.org/wiki/ISCSI_Extensions_for_RDMA
https://en.wikipedia.org/wiki/InfiniBand

Compilando Podcast:
InfiniBand
https://www.eduardocollado.com/2024/01/02/infiniband/
#Hpc #IA #Infiniband #InteligenciaArtificial #Supercomputación

HPC Guru:
Nvidia data center roadmap: #Hopper refresh in 2024, transition to #Blackwell later in 2024, with another architecture in 2025.

Nvidia now breaks out its Arm-based products and its x86-based products, with #ARM on top.

On the #networking side, both #Infiniband and #Ethernet are going to progress from 400Gbps to 800Gbps in 2024 and then to 1.6Tbps in 2025.

Something missing from the roadmap is the NVSwitch/NVLink roadmap.
https://servethehome.com/nvidia-data-center-roadmap-with-gx200nvl-gx200-x100-and-x40-ai-chips-in-2025/
#AI #HPC #GPU

Benjamin Carr, Ph.D. 👨🏻‍💻🧬:
#Meta Platforms Is Determined To Make #Ethernet Work For #AI
#MetaPlatforms is one of the founding companies behind the #UltraEthernet Consortium. What unites these companies – #Broadcom, #Cisco, and #HewlettPackardEnterprise for switch #ASICs (and soon #Marvell, we think), #Microsoft and Meta among the titans, and Cisco, #HPE, and #Arista Networks among the switch makers – is a common enemy: #InfiniBand. They are figuring out how to make Ethernet as good as IB for AI & #HPC #networking.
https://www.nextplatform.com/2023/09/26/meta-platforms-is-determined-to-make-ethernet-work-for-ai/

flatplanet:
Nice to see that they are working to avoid #infiniband for #ai #networks #ultraethernet
https://ultraethernet.org/

HPC Guru:
Ultra Ethernet Consortium

#InfiniBand is controlled by one vendor, and the hyperscalers and cloud builders hate that, and it is not Ethernet, and they hate that, too.

They want one protocol to scale to 1 million endpoints in a single network.

They also want a new implementation of RDMA that is more efficient and more scalable than either InfiniBand or Ethernet with RoCE.
https://www.nextplatform.com/2023/07/20/ethernet-consortium-shoots-for-1-million-node-clusters-that-beat-infiniband/
#HPC #AI #network

flatplanet:
https://www.nextplatform.com/2023/06/22/cisco-guns-for-infiniband-with-silicon-one-g200/
#networking #ethernet #ai #infiniband #cisco

Mina:
I'm looking for recommendations for a cheap / used PC that can run #FreeBSD and has at least two PCIe slots for #InfiniBand cards.

(Also looking for hints on cheap FreeBSD-compatible InfiniBand cards that aren't Mellanox ;)

Gabriele Svelto:
Eleven years ago I volunteered to add native #Infiniband / #RDMA support to #ZeroMQ. At the time I was working on high-performance networking and I thought it was a nice challenge... but shortly afterwards I landed my job at @mozilla and never finished it.

Since then I've been contacted multiple times by people who wished to finish my work, but none succeeded. The last time was yesterday. Maybe I should give it a spin again: https://zeromq-dev.zeromq.narkive.com/a3hbU2H1/contributing-native-infiniband-rdma-support-to-0mq

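For context on what "native InfiniBand/RDMA support" involves (a sketch added here, not code from the linked ZeroMQ thread): a verbs-based transport has to create a handful of libibverbs objects per connection before any message can move. The snippet below sets up the usual ones on Linux with libibverbs; connection establishment and the actual send/receive path are omitted, and the file name is only illustrative.

/* Minimal setup of the libibverbs objects an RDMA transport needs.
 * Build with: cc rdma_setup.c -libverbs */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void) {
    int n = 0;
    struct ibv_device **devs = ibv_get_device_list(&n);
    if (!devs || n == 0) { fprintf(stderr, "no RDMA devices found\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(devs[0]);        /* open first HCA */
    struct ibv_pd *pd = ibv_alloc_pd(ctx);                      /* protection domain */
    struct ibv_cq *cq = ibv_create_cq(ctx, 64, NULL, NULL, 0);  /* completion queue */

    /* Register a buffer so the HCA is allowed to DMA into and out of it. */
    char *buf = calloc(1, 4096);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, 4096,
                                   IBV_ACCESS_LOCAL_WRITE | IBV_ACCESS_REMOTE_WRITE);

    /* A reliable-connected queue pair: the rough equivalent of a socket. */
    struct ibv_qp_init_attr qpa = {
        .send_cq = cq, .recv_cq = cq, .qp_type = IBV_QPT_RC,
        .cap = { .max_send_wr = 64, .max_recv_wr = 64,
                 .max_send_sge = 1, .max_recv_sge = 1 },
    };
    struct ibv_qp *qp = ibv_create_qp(pd, &qpa);
    printf("QP %u created on %s\n", qp->qp_num, ibv_get_device_name(devs[0]));

    /* Teardown in reverse order of creation. */
    ibv_destroy_qp(qp); ibv_dereg_mr(mr); free(buf);
    ibv_destroy_cq(cq); ibv_dealloc_pd(pd);
    ibv_close_device(ctx); ibv_free_device_list(devs);
    return 0;
}
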
Glenn K. Lockwood:
The dates for the 2023 MVAPICH User Group (#MUG23) have been set as August 21-23 and, as usual, it will be near OSU in Columbus. Despite its name, MUG is a great snapshot of the state of the practice in #InfiniBand in #HPC and #AI overall.
See https://mug.mvapich.cse.ohio-state.edu/.

Mina:
FreeBSD vs cloud-init