#sycl

Dr. Moritz Lehmann
What an honor to start the #IWOCL conference with my keynote talk! Nowhere else do you get to talk to so many #OpenCL and #SYCL experts in one room! I shared some updates on my #FluidX3D #CFD solver: how I optimized it at the smallest level of a single grid cell in order to scale it up on the largest #Intel #Xeon6 #HPC systems, which provide more memory capacity than any #GPU server. 🖖😃

Dr. Moritz Lehmann
Just arrived in wonderful Heidelberg, looking forward to presenting the keynote talk at #IWOCL tomorrow! See you there! 🖖😁
https://www.iwocl.org/ #OpenCL #SYCL #FluidX3D #GPU #HPC

Amartya
My brain is absolutely fried.
Today is the last day of coursework submissions for this semester. What a hectic month: DNNs with PyTorch, brain-model parallelisation with MPI, SYCL and OpenMP offloading of percolation models, and hand-optimizing serial codes for performance.
Two submissions are due today. I have submitted one and am finalising my report for the second.
Definitely having a pint after this.

#sycl #hpc #msc #epcc #cuda #pytorch #mpi #openmp #hectic #programming #parallelprogramming #latex

Amartya
Started SYCL this semester in my MSc, and I have a coursework on it.
I have never been more frustrated in my life.
I am not saying SYCL is bad; I might just be too dumb to master it in one semester in order to port an existing CPU code to use MPI and SYCL together.
CUDA was much easier for me for the same task.

#sycl #hpc #parallelprogramming #gpu #nvidia #cuda #msc #scientificcomputing #amd #mpi #epcc

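For context on what such a port involves, here is a minimal sketch of a SYCL vector-add kernel, roughly the analogue of a first CUDA kernel launch. It is not taken from the coursework above; the queue setup, buffer names, and problem size are illustrative assumptions.

    // Minimal SYCL vector add - a sketch of the kind of kernel a CPU-to-SYCL
    // port introduces. All names and sizes are illustrative assumptions.
    #include <sycl/sycl.hpp>
    #include <vector>
    #include <iostream>

    int main() {
        constexpr size_t N = 1 << 20;                 // assumed problem size
        std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

        sycl::queue q;                                // default device selection
        {
            // Buffers take ownership of the host data for the scope below.
            sycl::buffer<float> A(a.data(), sycl::range<1>(N));
            sycl::buffer<float> B(b.data(), sycl::range<1>(N));
            sycl::buffer<float> C(c.data(), sycl::range<1>(N));

            q.submit([&](sycl::handler& h) {
                sycl::accessor rA(A, h, sycl::read_only);
                sycl::accessor rB(B, h, sycl::read_only);
                sycl::accessor wC(C, h, sycl::write_only, sycl::no_init);
                // The counterpart of a CUDA __global__ kernel launch.
                h.parallel_for(sycl::range<1>(N), [=](sycl::id<1> i) {
                    wC[i] = rA[i] + rB[i];
                });
            });
        }   // buffer destructors copy results back to the host vectors

        std::cout << "c[0] = " << c[0] << "\n";       // expect 3
        return 0;
    }

With Intel's DPC++ toolchain such a file is typically compiled with icpx -fsycl; AdaptiveCpp and other SYCL implementations provide their own compiler drivers.
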
pafurijaz
It seems that #Vulkan could be the real alternative for running #AI on GPUs or CPUs of any brand, without necessarily having to rely on #CUDA or #AMD's #ROCm. I thought #SYCL was the alternative. This might finally free us from the #Nvidia monopoly.
#Khronos

ct
Managed to get an #Intel Arc A750 #gpu running on #risc_v using #OpenCL, #SYCL, and #AdaptiveCpp. Software PRs submitted for review.

#hpc #supercomputing

@risc_v

HGPU group
The Shamrock code: I- Smoothed Particle Hydrodynamics on GPUs

#SYCL #ROCm #CUDA #PTX #OpenMP #MPI #Astrophysics #Physics #Package

https://hgpu.org/?p=29827

HGPU group
ML-Triton, A Multi-Level Compilation and Language Extension to Triton GPU Programming

#SYCL #CUDA #oneAPI #AI #Triton #Compilers #Intel

https://hgpu.org/?p=29825

HGPU group
Concurrent Scheduling of High-Level Parallel Programs on Multi-GPU Systems

#SYCL #TaskScheduling #PerformancePortability #HPC #Package

https://hgpu.org/?p=29823

Giuseppe Bilotta
Even now, Thrust as a dependency is one of the main reasons why we have a #CUDA backend, a #HIP / #ROCm backend and a pure #CPU backend in #GPUSPH, but not a #SYCL or #OneAPI backend (which would allow us to extend hardware support to #Intel GPUs). <https://doi.org/10.1002/cpe.8313>

This is also one of the reasons why we implemented our own #BLAS routines when we introduced the semi-implicit integrator. A side effect of this choice is that it allowed us to develop the improved #BiCGSTAB that I've had the opportunity to mention before <https://doi.org/10.1016/j.jcp.2022.111413>. Sometimes I do wonder whether it would be appropriate to "excorporate" it into its own library for general use, since it's something that would benefit others. On the other hand, it was developed specifically for GPUSPH and is tightly integrated with the rest of it (including its support for multi-GPU), and refactoring it into a library like cuBLAS is

a. too much effort
b. probably not worth it.

Again, following @eniko's original thread, it's really not that hard to roll your own, and it's probably less time consuming than trying to wrangle your way through an API that may or may not fit your needs.

6/

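To illustrate the "roll your own" point, and emphatically not as GPUSPH's actual code, here is a hedged sketch of two hand-rolled BLAS-style building blocks, an axpy and a dot product, written in SYCL since that is this feed's topic; all names, sizes, and the reduction strategy are assumptions.

    // Hand-rolled axpy (y = a*x + y) and dot product in SYCL. Illustrative only.
    #include <sycl/sycl.hpp>
    #include <vector>
    #include <iostream>

    int main() {
        constexpr size_t N = 1 << 16;
        const float a = 0.5f;
        std::vector<float> x(N, 2.0f), y(N, 1.0f);
        float dot = 0.0f;

        sycl::queue q;
        {
            sycl::buffer<float> X(x.data(), sycl::range<1>(N));
            sycl::buffer<float> Y(y.data(), sycl::range<1>(N));
            sycl::buffer<float> D(&dot, sycl::range<1>(1));

            // axpy: one work-item per element.
            q.submit([&](sycl::handler& h) {
                sycl::accessor rX(X, h, sycl::read_only);
                sycl::accessor rwY(Y, h, sycl::read_write);
                h.parallel_for(sycl::range<1>(N), [=](sycl::id<1> i) {
                    rwY[i] += a * rX[i];
                });
            });

            // dot(x, y): a SYCL 2020 reduction performs the tree reduction.
            q.submit([&](sycl::handler& h) {
                sycl::accessor rX(X, h, sycl::read_only);
                sycl::accessor rY(Y, h, sycl::read_only);
                auto sum_red = sycl::reduction(D, h, sycl::plus<float>());
                h.parallel_for(sycl::range<1>(N), sum_red,
                               [=](sycl::id<1> i, auto& sum) {
                    sum += rX[i] * rY[i];
                });
            });
        } // buffers write their results back to x, y and dot here

        std::cout << "dot = " << dot << "\n";
        return 0;
    }
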
Giuseppe Bilotta
I'm getting the material ready for my upcoming #GPGPU course that starts in March. Even though I most probably won't get to it, I also checked my trivial #SYCL programs. Apparently the 2025.0 version of the #Intel #OneAPI #DPCPP runtime doesn't like any #OpenCL platform except Intel's own (I have two other platforms that support #SPIRV, so why aren't they showing up? From the documentation I can find online this should be sufficient, but apparently it's not ...)

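A minimal probe for this kind of issue is to ask the SYCL runtime which platforms and devices it exposes and print their names. This is generic SYCL 2020 code, not the author's test program:

    // List every platform and device the SYCL runtime is willing to expose.
    #include <sycl/sycl.hpp>
    #include <iostream>

    int main() {
        for (const auto& plat : sycl::platform::get_platforms()) {
            std::cout << "Platform: "
                      << plat.get_info<sycl::info::platform::name>() << "\n";
            for (const auto& dev : plat.get_devices()) {
                std::cout << "  Device: "
                          << dev.get_info<sycl::info::device::name>() << "\n";
            }
        }
        return 0;
    }

If the platforms are detected but filtered, the sycl-ls tool shipped with oneAPI and the ONEAPI_DEVICE_SELECTOR environment variable can show, and respectively change, which backends the DPC++ runtime will use.
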
HGPU group
CPU-GPU co-execution through the exploitation of hybrid technologies via SYCL

#SYCL #OpenCL #CUDA #LLVM #PerformancePortability #LoadBalancing #HybridComputing

https://hgpu.org/?p=29717

HGPU group
Exploring data flow design and vectorization with oneAPI for streaming applications on CPU+GPU

#SYCL #oneAPI #Package

https://hgpu.org/?p=29705

Khronos Group
HiPEAC 2025 kicks off next week, and we're excited to be featured in two great sessions on safety-critical development and on developing highly parallel applications. If you're attending HiPEAC in Barcelona, be sure to come and see us!

More information on these sessions: https://www.khronos.org/events/hipeac-2025
#SYCL #opencl #vulkan

Tom Deakin
We're used to leaning on children's books in Computer Science, with Gulliver's big-endian vs little-endian. Back at Supercomputing #SC24, I spoke at the #Intel booth all about open standards, performance portability, and the journey up the Yellow Brick Road to see the Wizard of Oz. Check out the video of the talk on YouTube:
https://youtu.be/xO8FGAOScpo?si=_BnVilvTBa0Ns6dX
#performanceportability #OpenMP #SYCL

HGPU group
Analyzing the Performance Portability of SYCL across CPUs, GPUs, and Hybrid Systems with Protein Database Search

#SYCL #oneAPI #Bioinformatics #Databases #HPC #PerformancePortability #Package

https://hgpu.org/?p=29596

HGPU group
Performance portability via C++ PSTL, SYCL, OpenMP, and HIP: the Gaia AVU-GSR case study

#HIP #SYCL #OpenMP #CUDA #PerformancePortability #HPC #Astrophysics #Package

https://hgpu.org/?p=29555

Tom Deakin
Birds of a feather #SYCL together. Join us now in B212! #SC24

Khronos Group
This Tuesday at Supercomputing, ANARI will have a BOF in Room B209 starting at 12:15pm.

Or join the SYCL BOF just down the hall in Room B212. We hope to see you there.

More info: https://www.khronos.org/events/supercomputing-2024
#sc24 #SYCL #ANARI

Tom Deakin
This year is my 10th Supercomputing (two of them online through the pandemic). It also means I've been helping teach parallel programming models in the SC Tutorial program for a decade. Let's not forget that parallel programming is hard, however we write it down! #SC24 #OpenMP #OpenCL #Cpp #SYCL