again building off the same code, a matrix showing all possible halfway-interpolations between points in the letters (i.e., top row is (A+A)/2, (A+B)/2, (A+C)/2, etc.) (matrix is not symmetric because letters have different numbers of points)
hi-res here since mastodon downsampled it http://static.decontextualize.com/alphabet-lerp-matrix-2018-01-07.png
(has some random rotation on the points of the letterforms themselves and on where each segment is drawn beneath the previous)
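A rough sketch of how a halfway blend like this could work (my reconstruction, not the actual code): resample one glyph's point list to match the other's point count, then average pairwise. Resampling b onto a's point count is also one plausible reason the matrix isn't symmetric: (A+B)/2 ends up with A's number of points, (B+A)/2 with B's. The function names and the resampling scheme here are assumptions.

```javascript
// Sketch of halfway interpolation between two letterform outlines,
// each represented as an array of [x, y] points.

// Resample a point list to n points (n >= 2) by linear
// interpolation along the index.
function resample(pts, n) {
  const out = [];
  for (let i = 0; i < n; i++) {
    const t = (i * (pts.length - 1)) / (n - 1);
    const j = Math.floor(t);
    const f = t - j;
    const p = pts[j];
    const q = pts[Math.min(j + 1, pts.length - 1)];
    out.push([p[0] + (q[0] - p[0]) * f, p[1] + (q[1] - p[1]) * f]);
  }
  return out;
}

// (A + B) / 2: resample b onto a's point count, then average pairwise.
// Because the result inherits a's point count, halfway(a, b) and
// halfway(b, a) generally differ -- hence an asymmetric matrix.
function halfway(a, b) {
  const bb = resample(b, a.length);
  return a.map((p, i) => [(p[0] + bb[i][0]) / 2, (p[1] + bb[i][1]) / 2]);
}
```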
I keep posting examples that look the same, but this one was made with a completely different toolchain: opentype.js -> svg-mesh-3d -> p5.js (kind of a hassle to get working tbh, but hopefully wayyy more versatile for these experiments going forward) https://mastodon.social/media/Mwq4ZesNa2DTzAv2_Qo
@ultimape thanks for the pointers! i didn't know about two.js. ultimately i don't care about fast rendering—what i want is intuitive access to the geometry, in a way that makes it easy to play around with the glyphs procedurally. in the end i don't think the mesh is what i want, but all of the other ways of converting the shapes to xy coords make it difficult to render the glyphs as solid w/o filling in holes. probably i just need to get used to working with the path data itself 🤷🏻‍♀️
@ultimape thanks! i've been using licecap because it's easy for now. very familiar with node-canvas but these are just daily experiments, not trying to do anything high quality yet. just a note, i am not a beginner—i teach creative coding (with processing and p5js) at the college level and have written a book on the topic for maker media. i actually used to share an office with rune madsen (who made rune.js)!
feeling like I'm getting stuck on a particular set of ideas here, will try to shake it up soon
this one is difficult to explain but I think it actually has a lot of promise: visualizing a word's GloVe embedding by dividing a circle into as many directions as the embedding has dimensions, then extending each point outward from the word's center according to the value of the corresponding dimension https://mastodon.social/media/cZvGlIY_gC0nTg5NNOQ https://mastodon.social/media/I4dFbkuhiuqzrfEylcw (basically an adaptation of this http://static.decontextualize.com/vecviz/ but using the points of the font path instead of just a blob)
@enkiv2 yes, the words themselves are just n-dimensional vectors. the way that I drew them in that experiment was to find n equally spaced angles and then draw a line whose length corresponds to each dimension—a hack to make it easier to see the values for each dimension. how to make that play nicely with a 2d projection of the word vectors themselves as points is up to you (or whoever!)
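A minimal sketch of that drawing scheme (my own reconstruction; the function name, scale factor, and handling of negative values are assumptions): dimension i of an n-dimensional vector gets angle 2πi/n, and the line from the center has length proportional to that dimension's value.

```javascript
// Sketch: turn an n-dimensional embedding into the endpoints of n
// radial lines around a center point (cx, cy). Dimension i is drawn
// at angle 2*pi*i/n with length vec[i] * scale; negative values
// simply point the opposite way.
function embeddingRays(vec, cx, cy, scale = 1) {
  const n = vec.length;
  return vec.map((v, i) => {
    const angle = (2 * Math.PI * i) / n;
    const r = v * scale;
    return [cx + r * Math.cos(angle), cy + r * Math.sin(angle)];
  });
}
```

In p5.js you'd then draw `line(cx, cy, x, y)` for each endpoint, or use the endpoints to displace the points of the font path as in the images above.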
@aparrish I'm curious how this would change with an embedding like ConceptNet (I'm intrigued that they edited out a lot of the gender and racial bias from the training data). Would we see big changes in traditionally gendered words like "housekeeper"? I've been trying to think of a demo other than analogy for showing the effect of the de-biasing.