mastodon.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
The original server, operated by the Mastodon gGmbH non-profit.

#cvpr

OpenCV
The OpenCV Perception Challenge for Bin-Picking is in full swing, with teams submitting solutions to the leaderboard. If you're in the robotics or AI space, this is a great way to test your skills, win a share of the $60,000 in prizes, and, most excitingly, be part of an official CVPR Workshop at CVPR 2025 in Nashville!

Sign up to participate: https://bpc.opencv.org/web/challenges/challenge-page/1/overview
See the leaderboard: https://bpc.opencv.org/web/challenges/challenge-page/1/leaderboard/1

#OpenCV #ComputerVision #AI #BPC2025 #Robotics #Competition #CVPR
Adam Cook
One of the dumbest talks I have ever heard was given by @karpathy.bsky.social at #CVPR when he still worked at #Tesla. An absurd slide deck, obviously written by Musk, to justify Tesla dumping radar sensors some weeks earlier, which was only done because of COVID supply chain issues.
José Oramas
Pleased to receive a relatively reduced review assignment (four papers) for #CVPR 2025. The assignment seems to be on point; thanks to the PCs and ACs for their efforts. #CV
Adrien Foucart
So apparently #CVPR (one of the top computer vision conferences) thinks LLMs are an acceptable tool to "perform background research" while reviewing a submission... everything is fine, I guess.

Also in their rules: "An author cannot submit more than 25 papers." What. Like, I get that a senior author may "significantly contribute" to a handful of papers at the same conf, but surely not more than... 5? I can't imagine what they can contribute to 25 -- besides funding, of course... #academia
Waseem
Hey #fediverse, I want to connect with more people here.

I am a #MachineLearning engineer with experience in #python, #golang, some #rust, #php, and #JavaScript. I also love research and have a paper published at #CVPR. Currently, I am using #AI to improve safety on the roads. I've worked as a full-stack engineer in the past.

I find this platform and the people here awesome. I've had amazing interactions and want to grow them.

#introduction #intro #tech #mastodon #activitypub #linux
Oliver Geer
I don't think this creative writing is common research practice, but I definitely like it...

The paper is called "Eternal Sunshine of the Spotless Net: Selective Forgetting in Deep Networks", and it's from an actual IEEE publication. https://doi.org/10.1109/CVPR42600.2020.00932

#academia #AcademicChatter #computerScience #neuralnetwork #ieee #cvpr #cvpr2020 #machineLearning #machineUnlearning
Phil Nelson
Some Gameboy Camera pics from CVPR 2024 in Seattle #CVPR #gameboycamera
Eryk Salvaggio
Great news: “Because of You” claimed one of the 3 top art prizes at the Conference on Computer Vision and Pattern Recognition, #CVPR. It’s a video art piece, created with Avijit Ghosh, PhD, exploring parallels between the story of Henrietta Lacks and the datafication and erasure of identity in AI.

Thank you to Luba Elliott for recognizing the work, and congrats to the other winners (and the creators of all 115 works!).

You can read about & see the piece here: https://thecvf-art.com/project/eryk-salvaggio-avijit-ghosh/
Fraunhofer SIT
Raphael Antonius Frick presented a paper on detecting diffusion-based inpainting at this year's #CVPR 2024 as part of the #DFAD workshop. The model can identify local image manipulations (#Bildmanipulationen) created by text-to-image generators with high precision, even when the images are heavily compressed or were produced by unseen generators such as the recently released #Adobe Firefly 3.
The #Paper can be found here:
https://openaccess.thecvf.com/content/CVPR2024W/DFAD/papers/Frick_DiffSeg_Towards_Detecting_Diffusion-Based_Inpainting_Attacks_Using_Multi-Feature_Segmentation_CVPRW_2024_paper.pdf
OpenCV
On our next episode we welcome back multi-time guests Paula Ramos and Samet Akcay to tell us what the OpenVINO team has in store for the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2024. #CVPR #OpenCV #OpenVINO

Watch on LinkedIn: https://www.linkedin.com/events/7184641695936901120
or YouTube: https://youtube.com/live/vwUB64GbUNA
Andrew Hundt
Thanks to the SCoFT team for all the great work that got our paper into #CVPR!! 🎉🎊

Authors: Zhixuan Liu, Peter Schaldenbrand, Beverley-Claire Okogwu, Wenxuan Peng, Youngsik Yun, Andrew Hundt, Jihie Kim, Jean Oh.

Our paper is:
SCoFT: Self-Contrastive Fine-Tuning for Equitable Image Generation

The arXiv paper link is below with all the details, and we expect to make code available soon!

Thanks for reading!

#CVPR #CVPR2024 #AI #ML #SCoFT

https://arxiv.org/abs/2401.08053

19/n
Andrew Hundt
SCoFT has been accepted at #CVPR! 🎉🎊

Remember Google Gemini's biased generated images of medieval England that were just everywhere? Ancient internet history, I know.

I've been chomping at the bit because we've had methods for more culturally sensitive image generation under review that help address the kinds of issues we saw with Gemini!

SCoFT: Self-Contrastive Fine-Tuning for Equitable Image Generation

#CVPR2024

https://arxiv.org/abs/2401.08053

🧵 1/n
Urs Waldmann
Send in your favorite animal papers to our CV4Animals workshop at #CVPR by March 27!

Looking forward to the exciting program, and we hope to see many animal enthusiasts in Seattle #CVPR2024!

https://www.cv4animals.com/

@unikonstanz #CBehav #cv4animals #computervision
Urs Waldmann
The 4th CV4Animals workshop will take place at #CVPR2024 in Seattle!

https://www.cv4animals.com/

We invite submissions in 2 tracks:
- short 4-page unpublished work (potential invitation to IJCV Special Issue)
- published work
Deadline: March 27, 2024

#CVPR #CVPR24 #cv4animals @unikonstanz #CBehav #computervision
Simeon Schmauß
There is also a recording of the presentation at #CVPR:
https://youtu.be/9JpGjpITiDM?si=97SN2UCZhMJ1pv_3&t=8377
gtbarry
Pixel Codec Avatars

PiCA improves reconstruction over existing techniques across expressions and views of different genders and skin tones. The PiCA model is much smaller than the state-of-the-art baseline model and makes multi-person telecommunication possible: on a single Oculus Quest 2 mobile VR headset, 5 avatars are rendered in real time in the same scene.

#facebook #meta #realitylabs #oculus #Pica #CVPR #avatar #communications #virtualreality #VR #technology #tech

https://research.facebook.com/publications/pixel-codec-avatars/
Rerun
It’s easy and insightful to use Rerun to understand core concepts of new research! You can find a deep dive into all the papers we have covered so far on our new examples page (link below).

What papers should we look at next?

You can find all of the paper visualizations on our new examples page:
https://www.rerun.io/examples/paper-visualizations

#computervision #ai #cvpr
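For anyone curious what that looks like in practice, here is a minimal sketch of logging data to the Rerun viewer with the Python SDK; the app id, entity path, and random point cloud are placeholder assumptions for illustration, not taken from any of the paper examples above:

```python
import numpy as np
import rerun as rr  # Rerun Python SDK: pip install rerun-sdk

# Start a recording and spawn the Rerun viewer alongside this process.
rr.init("paper_visualization_sketch", spawn=True)

# Log a placeholder 3D point cloud. A real paper walkthrough would log the
# method's intermediate outputs (images, depth maps, camera poses, ...) the
# same way, each under its own entity path.
points = np.random.uniform(-1.0, 1.0, size=(200, 3))
rr.log("reconstruction/points", rr.Points3D(points, radii=0.01))
```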
Lili
What is the latest research in quantifying animal movement and behavior?

I wrote down a detailed overview here, based on what I saw at CVPR this year:
https://writings.lambdaloop.com/posts/cv4animals-2023/

#PoseEstimation #animals #computervision #cvpr #cvpr2023 #neuroscience
Adam Cook
The Verge video cites #Musk's desire, in part, to remove certain sensor hardware to cut costs and/or to alleviate supply chain issues.

**However**, that argument stands in contrast to what then-Autopilot Director Andrej #Karpathy stated in the attached #CVPR presentation: https://www.youtube.com/watch?v=eOL_rCK59ZI&t=28293s

Andrej's argument was, essentially, that "camera-only" computer vision yielded a "richer dataset" than when Tesla also embraced radar sensors in its vehicles.