I have a vague goal of doing daily or at least regular typography experiments here in 2018; here's day one: applying Perlin noise to the positions of points along the path of the text mastodon.social/media/P-VAKr6F
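
A minimal p5.js sketch of what that might look like (the font file, sample factor, and noise scales here are placeholders, not the actual code behind the post):

```js
let font, pts;

function preload() {
  font = loadFont('assets/SomeFont.otf'); // hypothetical font path
}

function setup() {
  createCanvas(800, 300);
  // sample points along the outline of the text
  pts = font.textToPoints('typography', 40, 200, 120, { sampleFactor: 0.2 });
}

function draw() {
  background(255);
  noStroke();
  fill(0);
  for (const p of pts) {
    // offset each sampled point with Perlin noise, animated over time
    const dx = (noise(p.x * 0.01, p.y * 0.01, frameCount * 0.01) - 0.5) * 30;
    const dy = (noise(p.x * 0.01 + 100, p.y * 0.01, frameCount * 0.01) - 0.5) * 30;
    circle(p.x + dx, p.y + dy, 3);
  }
}
```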

in the same vein more or less, hmm. probably not that much more of interest can be done with just having a list of x, y coordinates mastodon.social/media/Y24AHCBl

each point in one word moves to the nearest point in the next word mastodon.social/media/b-32WNDA
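
Roughly, the nearest-point matching could look like this (plain JavaScript; the point lists would come from something like font.textToPoints(), and the function name is made up):

```js
// For every point in wordA, find the closest point in wordB.
// (The follow-up variation adds a random term to the distance before comparing.)
function nearestPointPairs(wordA, wordB) {
  return wordA.map(a => {
    let best = wordB[0];
    let bestD = Infinity;
    for (const b of wordB) {
      const d = (a.x - b.x) ** 2 + (a.y - b.y) ** 2; // squared distance is enough for comparison
      if (d < bestD) { bestD = d; best = b; }
    }
    return { from: a, to: best };
  });
}
```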

adding a bit of randomness into the distance function... mastodon.social/media/-frlr0-l

based on yesterday's code, here's the entire alphabet (uppercase and lowercase) with each letter's points lerped a little more than halfway to the next mastodon.social/media/WsrdYvLe
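
Something like this p5.js fragment, assuming the two letters' point lists are already paired up (by index or by the nearest-point matching above); 0.6 stands in for "a little more than halfway":

```js
function lerpLetterPoints(ptsA, ptsB, amt = 0.6) {
  const n = Math.min(ptsA.length, ptsB.length);
  const out = [];
  for (let i = 0; i < n; i++) {
    out.push({
      x: lerp(ptsA[i].x, ptsB[i].x, amt), // p5.js linear interpolation
      y: lerp(ptsA[i].y, ptsB[i].y, amt),
    });
  }
  return out;
}
```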

perhaps not surprisingly, a little less successful with a delicate serif font mastodon.social/media/sh4XjS9V

again building off the same code, a matrix showing all possible halfway-interpolations between points in the letters (i.e., the top row is (A+A)/2, (A+B)/2, (A+C)/2, etc.; the matrix is not symmetric because letters have different numbers of points)

today: speculative letterforms made from stitching together top/bottom halves of randomly rotated letters (I have more ambitious ideas for this, just trying to do some foundation work today) mastodon.social/media/IqrbvTrT

continuing this experiment: trying to match top halves to bottom halves based on how similar the points are through their horizontal center line (also fixed the rotation here to increments of π/8 radians) mastodon.social/media/JTU2hQbV
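
One plausible way to score how well a top half fits a bottom half, comparing the x positions where each half meets the horizontal center line (illustrative only; the actual matching code isn't shown in the thread):

```js
function seamScore(topCrossings, bottomCrossings) {
  const n = Math.min(topCrossings.length, bottomCrossings.length);
  let total = Math.abs(topCrossings.length - bottomCrossings.length) * 50; // penalize mismatched counts
  for (let i = 0; i < n; i++) {
    total += Math.abs(topCrossings[i] - bottomCrossings[i]);
  }
  return total; // lower = better match
}

// rotation snapped to increments of π/8, as described in the post
const angle = Math.floor(Math.random() * 16) * (Math.PI / 8);
```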

stringing together middle segments of letterforms based on similarity at the top and bottom of the segments mastodon.social/media/gvRrkTRd

(has some random rotation on the points of the letterforms themselves and on where each segment is drawn beneath the previous)
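
A very rough sketch of the segment-chaining above: pick each next middle segment by how closely its top edge matches the bottom edge of the one already placed (segments are assumed to carry `top` and `bottom` arrays of x positions; the greedy strategy is a guess, not the actual code):

```js
function edgeDistance(a, b) {
  const n = Math.min(a.length, b.length);
  let d = Math.abs(a.length - b.length) * 50; // penalize mismatched point counts
  for (let i = 0; i < n; i++) d += Math.abs(a[i] - b[i]);
  return d;
}

function chainSegments(segments, count) {
  let current = segments[Math.floor(Math.random() * segments.length)];
  const chain = [current];
  for (let i = 1; i < count; i++) {
    // choose the segment whose top edge best matches the current bottom edge
    current = segments.reduce((best, s) =>
      edgeDistance(current.bottom, s.top) < edgeDistance(current.bottom, best.top) ? s : best);
    chain.push(current);
  }
  return chain;
}
```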

little experiment with animating this (reeeeally need to start doing all of this in webgl so I can get decent framerates) mastodon.social/media/u8A3eQR2

back to Perlin noise experiments, setting offsets and line thickness with thresholds mastodon.social/media/VZ8iUyK3
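
One guess at what "thresholds" means here: sample noise per point, then quantize it into a few discrete stroke weights rather than using it directly (p5.js; the cutoffs and weights are arbitrary):

```js
function thresholdWeight(x, y) {
  const n = noise(x * 0.02, y * 0.02); // 0..1
  if (n < 0.35) return 1;
  if (n < 0.55) return 3;
  if (n < 0.75) return 6;
  return 10;
}

// usage while drawing each segment of the path:
// strokeWeight(thresholdWeight(p.x, p.y));
```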

trying to figure out three.js. I am really not used to working with scene graphs mastodon.social/media/GquowEs3

I keep on posting examples that look the same, but this one was made with a completely different set of tools: opentype.js -> svg-mesh-3d -> p5.js (kind of a hassle to get working tbh but will hopefully be wayyy more versatile for these experiments moving forward) mastodon.social/media/Mwq4ZesN
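
Roughly what that pipeline looks like, sketched under the assumption that both packages are installed; the font path is a placeholder:

```js
const opentype = require('opentype.js');
const svgMesh3d = require('svg-mesh-3d');

opentype.load('fonts/SomeFont.otf', (err, font) => {
  if (err) throw err;
  // opentype.js: glyph outlines for a string, as SVG path data
  const pathData = font.getPath('typography', 0, 0, 72).toPathData();
  // svg-mesh-3d: triangulate that path into { positions, cells }
  const mesh = svgMesh3d(pathData);
  // hand the mesh off to p5.js (or any renderer) and draw the triangles
  console.log(mesh.positions.length, 'vertices,', mesh.cells.length, 'triangles');
});
```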

much smoother with the webgl renderer in p5.js but I need to figure out why it's drawing these extraneous lines. tomorrow i guess mastodon.social/media/-eQ4kYNU

tonight trying out rune.js, stretching out the font paths mastodon.social/media/S4qcJ-tG

more rune.js, all coordinates in any path instruction that fall within the same grid square have the same elliptical motion mastodon.social/media/fBI-paEr
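
Roughly the grid idea, sketched in p5.js rather than rune.js: every coordinate falling in the same grid cell shares one set of per-cell motion parameters (names and numbers are made up):

```js
const cellSize = 40;
const cells = {}; // lazily created per-cell motion parameters

function cellMotion(x, y, t) {
  const key = Math.floor(x / cellSize) + ',' + Math.floor(y / cellSize);
  if (!cells[key]) {
    cells[key] = {
      rx: random(2, 12),     // ellipse radii for this cell
      ry: random(2, 12),
      phase: random(TWO_PI), // so cells don't move in lockstep
    };
  }
  const c = cells[key];
  return { dx: cos(t + c.phase) * c.rx, dy: sin(t + c.phase) * c.ry };
}
```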

slower animation with chunkier grid and higher randomness thresholds. making myself queasy 🙃 mastodon.social/media/0FDPeV7Z

going back momentarily to the letter morphing from earlier, here's the procedure applied to words from the phonetic random walk code that I posted examples of the other day... youtube.com/watch?v=ssQKONuFwv

feeling like I'm getting stuck on a particular set of ideas here, will try to shake it up soon

this one is difficult to explain but I think it actually has a lot of promise: visualizing a word's GloVe embedding by dividing a circle into arcs, one per dimension of the embedding, and extending each point of the font path outward from the word's center according to the value of the dimension for its arc mastodon.social/media/cZvGlIY_ mastodon.social/media/I4dFbkuh (basically an adaptation of this static.decontextualize.com/vec but with the points of the font path instead of just a blob)
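
A guess at that drawing rule in code (plain JavaScript; `embedding` is the word's GloVe vector, `pts` the font-path points around the word center (cx, cy), and the scale factor is arbitrary):

```js
function displacePoints(pts, embedding, cx, cy, scale = 20) {
  const n = embedding.length;
  return pts.map(p => {
    const angle = Math.atan2(p.y - cy, p.x - cx);                        // direction of this point from the center
    const idx = Math.floor(((angle + Math.PI) / (2 * Math.PI)) * n) % n; // which arc, i.e. which embedding dimension
    const r = embedding[idx] * scale;                                    // how far to push the point outward
    return { x: p.x + Math.cos(angle) * r, y: p.y + Math.sin(angle) * r };
  });
}
```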

@aparrish fascinating!

@aparrish I like where this is going. Makes me want to laser cut things

@aparrish
is it possible to show a sentence as a walk through semantic space & visualize it in a way that makes explicit (say) rhetorical parallelism?

@enkiv2 that is an interesting idea, hmm

@aparrish hard for me to imagine the right way to produce a flat projection

@enkiv2 i mean, tsne is the traditional choice for something like this

@aparrish t-distributed stochastic neighbour embedding?

@enkiv2 yep! for reducing the dimensionality to 2d. unless we're conceptualizing this task completely differently!

@aparrish The dimensionality reduction for a path wouldn't look like that for a word, right? If you had the word-blobs as points and positioned the lines between them, the two coordinate systems wouldn't match up.

@enkiv2 yes, the words themselves are just n-dimensional vectors. the way that I drew them in that experiment was to find n equally spaced angles and then draw a line whose length corresponds to each dimension—a hack to make it easier to see the values for each dimension. how to make that play nicely with a 2d projection of the word vectors themselves as points is up to you (or whoever!)
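
For the blob version that describes, something like this p5.js fragment (one spoke per dimension, at n equally spaced angles; `vec` is the word's embedding and the scale is arbitrary):

```js
function drawSpokes(vec, cx, cy, scale = 20) {
  for (let i = 0; i < vec.length; i++) {
    const a = (i / vec.length) * TWO_PI;
    line(cx, cy, cx + cos(a) * vec[i] * scale, cy + sin(a) * vec[i] * scale);
  }
}
```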

@aparrish I'm curious how this would change with an embedding like Conceptnet (I'm intrigued that they edited out a lot of the gender and racial bias from the training data). Would we see big changes in traditionally-gendered words like housekeeper? I've been trying to think of a demo other than analogy for showing the effect of the de-biasing.

@janellecshane i don't know! i've been meaning to try out the conceptnet vectors for a while. i think you'd also see notable effects from the de-biasing in, e.g. clustering and other unsupervised tasks that start with pretrained vectors, right?
