in the same vein more or less, hmm. probably not that much more of interest can be done with just having a list of x, y coordinates https://mastodon.social/media/Y24AHCBl_AfnIPBiOy4
each point in one word moves to the nearest point in the next word https://mastodon.social/media/b-32WNDAFvd9XDXbM0A
adding a bit of randomness into the distance function... https://mastodon.social/media/-frlr0-lViTLcsO6EAY
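A minimal sketch of what that matching step might look like, with an optional random term folded into the distance (the function name and jitter scale here are illustrative, not taken from the original code):

    // for each point in the source word, find the closest point in the target
    // word, optionally perturbing the distance so the pairings get scrambled
    function nearestPointPairs(src, dst, jitter = 0) {
      return src.map(p => {
        let best = null;
        let bestDist = Infinity;
        for (const q of dst) {
          // squared euclidean distance plus a random offset
          const d = (p.x - q.x) ** 2 + (p.y - q.y) ** 2 + Math.random() * jitter;
          if (d < bestDist) {
            bestDist = d;
            best = q;
          }
        }
        return [p, best];
      });
    }

    // e.g. pairings between two tiny point lists, with a little randomness
    const pairs = nearestPointPairs(
      [{x: 0, y: 0}, {x: 10, y: 5}],
      [{x: 2, y: 1}, {x: 9, y: 9}],
      25
    );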
based on yesterday's code, here's the entire alphabet (uppercase and lowercase) with each letter's points a little more than halfway lerped to the next https://mastodon.social/media/WsrdYvLeDyuV7MXCTWM
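The lerp step itself could be as simple as this sketch, assuming the two letters' point lists have already been put in correspondence; a t a bit above 0.5 gives the "a little more than halfway" look:

    // move each point partway toward its counterpart in the next letter
    function lerpPoints(fromPts, toPts, t = 0.6) {
      return fromPts.map((p, i) => ({
        x: p.x + (toPts[i].x - p.x) * t,
        y: p.y + (toPts[i].y - p.y) * t,
      }));
    }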
perhaps not surprisingly, a little less successful with a delicate serif font https://mastodon.social/media/sh4XjS9VxO7UmLWFgME
hi-res here since mastodon downsampled it http://static.decontextualize.com/alphabet-lerp-matrix-2018-01-07.png
today: speculative letterforms made from stitching together top/bottom halves of randomly rotated letters (I have more ambitious ideas for this, just trying to do some foundation work today) https://mastodon.social/media/IqrbvTrThNYXQkt8jrs
continuing this experiment: trying to match top halves to bottom halves based on how similar the points are through their horizontal center line (also fixed the rotation here to increments of π/8 radians) https://mastodon.social/media/JTU2hQbVTvoqYYC1Szc
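One plausible way to score a seam, sketched here as a guess at the approach rather than the actual code: compare where each half's outline crosses the horizontal center line and prefer pairs whose crossings line up:

    // lower score = better seam; the mismatch penalty of 100 is arbitrary
    function seamScore(topCrossings, bottomCrossings) {
      const n = Math.min(topCrossings.length, bottomCrossings.length);
      let total = Math.abs(topCrossings.length - bottomCrossings.length) * 100;
      for (let i = 0; i < n; i++) {
        total += Math.abs(topCrossings[i] - bottomCrossings[i]);
      }
      return total;
    }

    // pick the bottom half whose center-line crossings best match a given top half
    function bestBottom(topCrossings, bottoms) {
      return bottoms.reduce((best, b) =>
        seamScore(topCrossings, b) < seamScore(topCrossings, best) ? b : best);
    }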
stringing together middle segments of letterforms based on similarity at the top and bottom of the segments https://mastodon.social/media/gvRrkTRdANymacTOuFI
little experiment with animating this (reeeeally need to start doing all of this in webgl so I can get decent framerates) https://mastodon.social/media/u8A3eQR29dlibPKx0EA
back to perlin noise experiments, setting offsets and line thickness with thresholds https://mastodon.social/media/VZ8iUyK3ysVjh2xk63s
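A rough p5.js approximation of that setup (all the constants are placeholders): sample 2D Perlin noise over the canvas, skip anything below a threshold, and map the surviving values to an offset and a stroke weight:

    function setup() {
      createCanvas(600, 600);
      noLoop();
    }

    function draw() {
      background(255);
      for (let y = 0; y < height; y += 8) {
        for (let x = 0; x < width; x += 8) {
          const n = noise(x * 0.01, y * 0.01);
          if (n < 0.5) continue;                  // threshold: skip low-noise areas
          strokeWeight(map(n, 0.5, 1, 0.5, 4));   // line thickness from noise value
          const off = map(n, 0.5, 1, -6, 6);      // offset from noise value
          point(x + off, y + off);
        }
      }
    }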
trying to figure out three.js. I am really not used to working with scene graphs https://mastodon.social/media/GquowEs33Gj5qrNuKaY
I keep on posting examples that look the same, but this one was made with a completely different set of tools: opentype.js -> svg-mesh-3d -> p5.js (kind of a hassle to get working tbh but will hopefully be wayyy more versatile for these experiments moving forward) https://mastodon.social/media/Mwq4ZesNa2DTzAv2_Qo
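A hedged sketch of how those three pieces might be wired together (assuming the usual opentype.js and svg-mesh-3d entry points; the font file and sizes are placeholders, and in practice the mesh generation and the p5.js drawing may live in separate steps):

    const opentype = require('opentype.js');
    const svgMesh3d = require('svg-mesh-3d');

    opentype.load('myfont.ttf', (err, font) => {   // 'myfont.ttf' is a placeholder
      if (err) throw err;
      // render a glyph to an SVG path data string
      const pathData = font.getPath('A', 0, 0, 144).toPathData(2);
      // triangulate it into { positions, cells }
      const mesh = svgMesh3d(pathData, { scale: 4 });
      // then, inside a p5.js sketch, each cell can be drawn as a triangle
      // using mesh.positions[a], mesh.positions[b], mesh.positions[c]
    });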
more rune.js, all coordinates in any path instruction that fall within the same grid square have the same elliptical motion https://mastodon.social/media/fBI-paErJelMayLrd4k
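Roughly the idea as described, sketched as a standalone helper rather than the actual rune.js code: bucket each coordinate by grid square and give every coordinate in that square the same elliptical offset over time:

    // cellSize, radii, and the per-cell phase formula are all illustrative
    function ellipticalOffset(x, y, t, cellSize = 40, rx = 10, ry = 6) {
      const col = Math.floor(x / cellSize);
      const row = Math.floor(y / cellSize);
      const phase = ((col * 31 + row * 17) % 100) / 100 * Math.PI * 2; // shared per cell
      return {
        x: x + Math.cos(t + phase) * rx,
        y: y + Math.sin(t + phase) * ry,
      };
    }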
slower animation with chunkier grid and higher randomness thresholds. making myself queasy 🙃 https://mastodon.social/media/0FDPeV7ZDryHUFv63yI
going back momentarily to the letter morphing from earlier, here's the procedure applied to words from the phonetic random walk code that I posted examples of the other day... https://www.youtube.com/watch?v=ssQKONuFwvA
this one is difficult to explain but I think it actually has a lot of promise: visualizing a word's GloVe embedding by dividing a circle into arcs, one per dimension, and extending each point outward from the word's center according to the value of the embedding dimension for that arc https://mastodon.social/media/cZvGlIY_gC0nTg5NNOQ https://mastodon.social/media/I4dFbkuhiuqzrfEylcw (basically an adaptation of this http://static.decontextualize.com/vecviz/ but with the points of the font path instead of just a blob)
@aparrish I like where this is going. Makes me want to laser cut things
is it possible to show a sentence as a walk through semantic space & visualize it in a way that makes explicit (say) rhetorical parallelism?
@enkiv2 that is an interesting idea, hmm
@aparrish hard for me to imagine the right way to produce a flat projection
@enkiv2 i mean, tsne is the traditional choice for something like this
@aparrish t-distributed stochastic neighbour embedding?
@enkiv2 yep! for reducing the dimensionality to 2d. unless we're conceptualizing this task completely differently!
@aparrish The dimensionality reduction for a path wouldn't look like that for a word, right? If you had the word-blobs as points and positioned the lines between them, the two coordinate systems wouldn't match up.
@enkiv2 yes, the words themselves are just n-dimensional vectors. the way that I drew them in that experiment was to find n equally spaced angles and then draw a line whose length corresponds to each dimension—a hack to make it easier to see the values for each dimension. how to make that play nicely with a 2d projection of the word vectors themselves as points is up to you (or whoever!)
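As a sketch of that drawing rule (the scale factor is arbitrary): divide the circle into as many equal angles as the vector has dimensions, and emit one spoke per dimension whose length is proportional to that dimension's value:

    // returns one line segment per dimension, radiating from (cx, cy)
    function embeddingSpokes(vec, cx, cy, scale = 50) {
      return vec.map((v, i) => {
        const angle = (i / vec.length) * Math.PI * 2;
        return {
          x1: cx, y1: cy,
          x2: cx + Math.cos(angle) * v * scale,
          y2: cy + Math.sin(angle) * v * scale,
        };
      });
    }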
@aparrish I'm curious how this would change with an embedding like Conceptnet (I'm intrigued that they edited out a lot of the gender and racial bias from the training data). Would we see big changes in traditionally-gendered words like housekeeper? I've been trying to think of a demo other than analogy for showing the effect of the de-biasing.
@janellecshane i don't know! i've been meaning to try out the conceptnet vectors for a while. i think you'd also see notable effects from the de-biasing in, e.g. clustering and other unsupervised tasks that start with pretrained vectors, right?