#MathematicalPsychology

Ross Gayler:
The next VSAonline webinar is at 17:00 UTC (not the usual time), Monday 27 January.

Zoom: https://ltu-se.zoom.us/j/65564790287
Web: https://bit.ly/vsaonline

Speaker: Anthony Thomas, UC Davis, USA
Title: "Sketching a Picture of Vector Symbolic Architectures"

Abstract: Sketching algorithms are a broad area of research in theoretical computer science and numerical analysis that aim to distil data into a simple summary, called a "sketch," that retains some essential notion of structure while being much more efficient to store, query, and transmit.

Vector-symbolic architectures (VSAs) are an approach to computing on data represented using random vectors, and provide an elegant conceptual framework for realizing a wide variety of data structures and algorithms in a way that lends itself to implementation in highly parallel and energy-efficient computer hardware.

Sketching algorithms and VSA have a substantial degree of consonance in their methods, motivations, and applications. In this tutorial-style talk, I will discuss some of the connections between these two fields, focusing, in particular, on the connections between VSA and tensor sketches, a family of sketching algorithms concerned with the setting in which the data being sketched can be decomposed into Kronecker (tensor) products between more primitive objects. This is exactly the situation of interest in VSA, and the two fields have arrived at strikingly similar solutions to this problem.

#VectorSymbolicArchitectures #VSA #HyperdimensionalComputing #HDC #AI #ML #ComputationalCognitiveScience #CompCogSci #MathematicalPsychology #MathPsych #CognitiveScience #CogSci @cogsci

Ross Gayler:
The schedule for the next VSAonline webinar series (January to June 2025) is published at:

https://sites.google.com/view/hdvsaonline/spring-2025

There are 11 talks around #VectorSymbolicArchitecture / #HyperdimensionalComputing.

The talks are (almost always) recorded and published online, in case you can't participate in the live session.

@cogsci
#VSA #HDC #CompCogSci #MathPsych #AI #neuromorphic #neurosymbolic #ComputationalNeuroscience #ComputationalCognitiveScience #MathematicalPsychology

Ross Gayler:
@cian
If (a big if) we performed generalisation at retrieval (rather than at storage, as in almost all current artificial neural networks), then the episodic memories would be the essential input to the generalisation (and inference) process. You are best placed to know what dimensions to abstract over when you have a specific current task and goal to drive generalisation and inference.

(Of course, having arrived at some specific generalisation from the current retrieval, that generalisation might be stored as part of the current episodic memory and be available to guide future generalisations on retrieval.)

What are the implications if episodic memory is the primary form of memory and other (declarative/procedural/etc.) memories are epiphenomena arising out of the episodic memories?

#CogSci #CognitiveScience #MathPsych #MathematicalPsychology @cogsci

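[Editor's illustration, not part of the post: "generalisation at retrieval" has a long history in mathematical psychology as exemplar models, e.g. Nosofsky's Generalized Context Model, where raw episodes are stored verbatim and generalisation is computed at query time as a similarity-weighted aggregate, with task-supplied attention weights choosing which dimensions matter. A minimal Python sketch in that spirit, all names and parameters hypothetical:]

```python
import numpy as np

class EpisodicStore:
    """Store raw episodes; generalise only at retrieval time."""

    def __init__(self):
        self.features = []  # one feature vector per stored episode
        self.labels = []    # one outcome per stored episode

    def store(self, x, y):
        # Storage is verbatim: no abstraction happens here.
        self.features.append(np.asarray(x, dtype=float))
        self.labels.append(float(y))

    def generalise(self, query, attention, c=2.0):
        """Similarity-weighted prediction computed at retrieval.

        `attention` encodes the *current* task's view of which
        dimensions matter; change the task, change the generalisation,
        without touching the stored memories.
        """
        X = np.stack(self.features)
        y = np.asarray(self.labels)
        # Attention-weighted city-block distance, as in the GCM
        d = np.abs(X - np.asarray(query)) @ np.asarray(attention)
        s = np.exp(-c * d)               # similarity kernel
        return float(s @ y / s.sum())    # similarity-weighted mean

store = EpisodicStore()
store.store([1.0, 0.0], y=1.0)  # episode A
store.store([0.0, 1.0], y=0.0)  # episode B

# Same memories, different task-driven attention, different generalisation:
print(store.generalise([0.9, 0.9], attention=[1.0, 0.0]))  # near 1.0
print(store.generalise([0.9, 0.9], attention=[0.0, 1.0]))  # near 0.0
```
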
Ross Gayler:
Maths/CogSci/MathPsych lazyweb: Are there any algebras in which you have subtraction but don't have negative values? Pointers appreciated. I am hoping that the abstract maths might shed some light on a problem in cognitive modelling.

The context is that I am interested in formal models of cognitive representations. I want to represent things (e.g. cats), don't believe that we should be able to represent negated things (i.e. it shouldn't be possible to represent anti-cats), but it makes sense to subtract representations (e.g. remove the representation of a cat from the representation of a cat and a dog, leaving only the representation of the dog).

This *might* also be related to non-negative matrix factorisation (https://en.wikipedia.org/wiki/Non-negative_matrix_factorization), in that we want to represent a situation in terms of parts and don't allow anti-parts.

#mathematics #algebra #AbstractAlgebra #CogSci @cogsci #CognitiveScience #MathPsych #MathematicalPsychology

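[Editor's illustration, not part of the post: one standard answer is truncated subtraction ("monus") on the natural numbers, a ∸ b = max(a − b, 0), so subtraction exists but can never produce a negative value. Multisets of parts carry the same structure, and Python's collections.Counter implements exactly this clamped subtraction, which makes the cat-and-dog example compact:]

```python
from collections import Counter

# A scene as a multiset (bag) of parts: no anti-parts are representable.
scene = Counter({"cat": 1, "dog": 1})

# Counter subtraction is truncated (monus): counts are clamped at zero.
print(scene - Counter({"cat": 1}))   # Counter({'dog': 1})
print(scene - Counter({"cat": 5}))   # still Counter({'dog': 1}); no anti-cats

# The same operation on scalars: natural-number monus.
def monus(a: int, b: int) -> int:
    return max(a - b, 0)

print(monus(2, 5))  # 0, not -3
```
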
Ross Gayler:
Most of the artificial neural net simulation research I have seen (say, at venues like NeurIPS) seems to take a *very* simple conceptual approach to the analysis of simulation results: just treat everything as independent observations with fixed-effects conditions, when it might be better conceptualised as random effects and repeated measures. Do other people think this? Does anyone have views on whether it would be worthwhile doing more complex analyses, and whether the typical publication venues would accept them? Are there any guides to appropriate analyses for simulation results, e.g. what to do with the results coming from multi-fold cross-validation? (I presume the results are not independent across folds because they share cases.)

@cogsci #CogSci #CognitiveScience #MathPsych #MathematicalPsychology #NeuralNetworks #MachineLearning

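[Editor's illustration, not part of the post: one hedged sketch of the kind of analysis being gestured at, using statsmodels' MixedLM with training seed as a random effect and model variant as a fixed effect. The data frame and column names are invented for illustration; a real analysis would also need to address the dependence across CV folds (shared cases), which a per-run random intercept alone does not fully capture.]

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Hypothetical results table: accuracy of two model variants,
# each trained with several seeds, evaluated on several CV folds.
rows = []
for variant, base in [("A", 0.80), ("B", 0.82)]:
    for seed in range(10):
        seed_effect = rng.normal(0, 0.01)  # run-level random effect
        for fold in range(5):
            rows.append({
                "variant": variant,
                "seed": f"{variant}-{seed}",
                "fold": fold,
                "accuracy": base + seed_effect + rng.normal(0, 0.005),
            })
df = pd.DataFrame(rows)

# Random intercept per run (seed), instead of treating all 100
# fold-level scores as independent observations.
model = smf.mixedlm("accuracy ~ variant", df, groups=df["seed"])
fit = model.fit()
print(fit.summary())
```
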
Ross Gayler:
Thanks @hosford42 for reminding me of this half-day tutorial on Vector Symbolic Architectures / Hyperdimensional Computing. The authors have been applying HDC/VSA to place recognition in robotics, but the tutorial coverage is much wider.

https://www.tu-chemnitz.de/etit/proaut/workshops_tutorials/hdc_ki19/index.html

#VSA #VectorSymbolicArchitecture #HDC #HyperdimensionalComputing #CogRob #CognitiveRobotics #CompCogSci #ComputationalCognitiveScience #CogSci #CognitiveScience #MathPsych #MathematicalPsychology

Ross Gayler:
Here are two high-level articles that mention Vector Symbolic Architecture / Hyperdimensional Computing in the more "popular" end of the technical press:

https://www.quantamagazine.org/a-new-approach-to-computation-reimagines-artificial-intelligence-20230413/

https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10098176

#VSA #VectorSymbolicArchitecture #HDC #HyperdimensionalComputing #CompCogSci #ComputationalCognitiveScience #CogSci #CognitiveScience #MathPsych #MathematicalPsychology

Ross Gayler: Cognitive Science Society annual conference, Sydney, July 2023

Ross Gayler: Australasian Mathematical Psychology Conference - reminder: CFP & registration

Daniel Heck:
@kaitclark @perspektivbrocken
Thanks for starting the list! Can you please add me with the keywords #Psychometrics #CognitiveModeling #MathematicalPsychology