I have a vague goal of doing daily or at least regular typography experiments here in 2018; here's day one: applying perlin noise to the positions of points along the path of the text mastodon.social/media/P-VAKr6F
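
A minimal p5.js sketch of the same idea (not the original code; the font file, sample factor, and noise scales here are placeholder guesses):

```js
let font, pts;

function preload() {
  // any TTF/OTF file will do; "myfont.ttf" is a placeholder
  font = loadFont('myfont.ttf');
}

function setup() {
  createCanvas(600, 200);
  // sample points along the outlines of the text
  pts = font.textToPoints('mastodon', 40, 140, 96, { sampleFactor: 0.2 });
}

function draw() {
  background(255);
  noStroke();
  fill(0);
  for (const p of pts) {
    // offset each sampled point with Perlin noise that drifts over time
    const dx = (noise(p.x * 0.01, p.y * 0.01, frameCount * 0.01) - 0.5) * 20;
    const dy = (noise(p.x * 0.01, p.y * 0.01, 100 + frameCount * 0.01) - 0.5) * 20;
    circle(p.x + dx, p.y + dy, 3);
  }
}
```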

in the same vein more or less, hmm. probably not that much more of interest can be done with just having a list of x, y coordinates mastodon.social/media/Y24AHCBl

each point in one word moves to the nearest point in the next word mastodon.social/media/b-32WNDA
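
Presumably a pairing step something like this (a sketch, not the original code; srcPts and dstPts are assumed to be point arrays from textToPoints):

```js
// for each point of the first word, find the closest point of the next word
function nearestPointTargets(srcPts, dstPts) {
  return srcPts.map(p => {
    let best = dstPts[0];
    let bestD = Infinity;
    for (const q of dstPts) {
      const d = (p.x - q.x) ** 2 + (p.y - q.y) ** 2; // squared distance is enough
      if (d < bestD) { bestD = d; best = q; }
    }
    return best;
  });
}
```

Animating is then just interpolating each source point toward its target over time.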

based on yesterday's code, here's the entire alphabet (uppercase and lowercase) with each letter's points a little more than halfway lerped to the next mastodon.social/media/WsrdYvLe
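
The blend step might look roughly like this, reusing the pairing sketch above (t = 0.6 is a guess at "a little more than halfway"):

```js
// srcPts: sampled points of one letter; dstPts: sampled points of the next letter
const targets = nearestPointTargets(srcPts, dstPts);
const t = 0.6; // a little more than halfway
for (let i = 0; i < srcPts.length; i++) {
  circle(
    lerp(srcPts[i].x, targets[i].x, t),
    lerp(srcPts[i].y, targets[i].y, t),
    2
  );
}
```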

perhaps not surprisingly, a little less successful with a delicate serif font mastodon.social/media/sh4XjS9V

again building off the same code, a matrix showing all possible halfway-interpolations between points in the letters (i.e., top row is (A+A)/2, (A+B)/2, (A+C)/2, etc.) (matrix is not symmetric because letters have different numbers of points)
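
A rough sketch of how such a matrix could be drawn, assuming a hypothetical letterPts map from each character to its sampled points (in local coordinates) and reusing the nearest-point pairing from above; because the pairing is directional, swapping source and destination gives different results, hence the asymmetry:

```js
const letters = 'ABCDEFGH'.split(''); // subset of the alphabet for illustration
const cellSize = 60;

for (let row = 0; row < letters.length; row++) {
  for (let col = 0; col < letters.length; col++) {
    const src = letterPts[letters[row]];
    const dst = nearestPointTargets(src, letterPts[letters[col]]);
    for (let i = 0; i < src.length; i++) {
      // draw each point halfway between its position in the row letter
      // and its paired position in the column letter
      circle(
        col * cellSize + lerp(src[i].x, dst[i].x, 0.5),
        row * cellSize + lerp(src[i].y, dst[i].y, 0.5),
        1.5
      );
    }
  }
}
```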

today: speculative letterforms made from stitching together top/bottom halves of randomly rotated letters (I have more ambitious ideas for this, just trying to do some foundation work today) mastodon.social/media/IqrbvTrT

continuing this experiment: trying to match top halves to bottom halves based on how similar the points are through their horizontal center line (also fixed the rotation here to increments of π/8 radians) mastodon.social/media/JTU2hQbV
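
I'd guess the matching score works something like this (an illustrative sketch, not the original code; the function names and tolerance are made up): compare where the top half's points cross the center line with where the bottom half's points cross it, and pair the halves whose crossings line up best.

```js
// x-positions where a half's points sit near the horizontal center line
function crossings(pts, midY, tol = 2) {
  return pts
    .filter(p => Math.abs(p.y - midY) < tol)
    .map(p => p.x)
    .sort((a, b) => a - b);
}

// lower score = better match between a top half and a bottom half
function matchScore(topPts, bottomPts, midY) {
  const a = crossings(topPts, midY);
  const b = crossings(bottomPts, midY);
  const n = Math.min(a.length, b.length);
  if (n === 0) return Infinity;
  let sum = 0;
  for (let i = 0; i < n; i++) sum += Math.abs(a[i] - b[i]);
  // penalize halves whose crossing counts differ a lot
  return sum / n + Math.abs(a.length - b.length) * 10;
}

// rotation snapped to increments of PI / 8 before splitting:
// const theta = floor(random(16)) * PI / 8;
```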

stringing together middle segments of letterforms based on similarity at the top and bottom of the segments mastodon.social/media/gvRrkTRd

(has some random rotation on the points of the letterforms themselves and on where each segment is drawn beneath the previous)

little experiment with animating this (reeeeally need to start doing all of this in webgl so I can get decent framerates) mastodon.social/media/u8A3eQR2

back to perlin noise experiments, setting offsets and line thickness with thresholds mastodon.social/media/VZ8iUyK3
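
Roughly this kind of thing, I assume (a sketch with made-up threshold and ranges; pts is a sampled text path as before):

```js
stroke(0);
for (const p of pts) {
  const n = noise(p.x * 0.02, p.y * 0.02, frameCount * 0.01);
  if (n < 0.35) continue;                // threshold: drop low-noise points
  strokeWeight(map(n, 0.35, 1, 0.5, 6)); // line thickness from the same noise
  const off = map(n, 0.35, 1, -8, 8);    // offset from the same noise
  point(p.x + off, p.y + off);
}
```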

@aparrish this is wonderful. reminds me a lot of the work tomato did for underworld in the late nineties

trying to figure out three.js. I am really not used to working with scene graphs mastodon.social/media/GquowEs3
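
For anyone else in the same boat, the scene-graph part boils down to something like this (a generic three.js hello-world, not the code behind the image): the renderer draws a scene, the scene is a tree of objects, and transforms on a parent apply to all of its children.

```js
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(45, 600 / 400, 0.1, 100);
camera.position.z = 5;

const renderer = new THREE.WebGLRenderer();
renderer.setSize(600, 400);
document.body.appendChild(renderer.domElement);

const group = new THREE.Group();          // a parent node
const box = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshNormalMaterial()
);
group.add(box);                           // box is now a child of group
scene.add(group);                         // group is a child of the scene

function animate() {
  requestAnimationFrame(animate);
  group.rotation.y += 0.01;               // rotating the parent rotates the child
  renderer.render(scene, camera);
}
animate();
```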

@aparrish humble request to see the word "blep" using this animation

I feel it is an excellent candidate

@aparrish [SQUEALS OF DELIGHT]

YOU MADE MY WEEK, THANK YOU SO MUCH

I keep on posting examples that look the same, but this one was made with a completely different set of tools: opentype.js -> svg-mesh-3d -> p5.js (kind of a hassle to get working tbh but will hopefully be wayyy more versatile for these experiments moving forward) mastodon.social/media/Mwq4ZesN
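
A rough reconstruction of that pipeline (not the original code; assumes a bundler, since opentype.js and svg-mesh-3d are npm modules, plus a local font file):

```js
const opentype = require('opentype.js');
const svgMesh3d = require('svg-mesh-3d');

let mesh;

// opentype.js: text -> SVG path data string
opentype.load('myfont.ttf', (err, font) => {
  if (err) throw err;
  const pathData = font.getPath('blep', 0, 0, 72).toPathData(2);
  // svg-mesh-3d: SVG path data -> { positions, cells } triangle mesh
  mesh = svgMesh3d(pathData);
});

function setup() {
  createCanvas(600, 300);
}

function draw() {
  background(255);
  if (!mesh) return;
  translate(width / 2, height / 2);
  scale(200, -200); // mesh coords are roughly [-1, 1] and y-up; canvas is y-down
  noStroke();
  fill(0);
  // p5.js: draw every triangle in the mesh
  beginShape(TRIANGLES);
  for (const cell of mesh.cells) {
    for (const i of cell) {
      const [x, y] = mesh.positions[i];
      vertex(x, y);
    }
  }
  endShape();
}
```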

much smoother with the webgl renderer in p5.js but I need to figure out why it's drawing these extraneous lines. tomorrow i guess mastodon.social/media/-eQ4kYNU

@aparrish for a sec i thought this was a procedural animated captcha, it's a cool effect

@chr the effect is just adding perlin noise to the coordinates of each point! just an easy way to verify that I'm drawing the triangles in the right place and that I can write code that affects the way the shapes are drawn by modifying those coordinates

@aparrish I've been digging around this space a bit while trying to find a good rendering library.

One of the reasons I chose to avoid rune.js was that it targets svg. It has some neat font stuff too: printingcode.runemadsen.com/ex (apparently using opentype behind the scenes).

I saw that someone was using two.js to generate sprites from svg for pixi.js to render. uihacker.blogspot.com/2014/10/ - Doesn't help for me since I want to do realtime game stuff, but two.js might be a more direct alternative to meshifying?

@ultimape thanks for the pointers! i didn't know about two.js. ultimately i don't care about fast rendering, what i want is just intuitive access to the geometry in a way that makes it easy to play around with the glyphs procedurally... ultimately i don't think the mesh is what i want but all of the other ways of converting the shapes to xy coords make it difficult to render the glyphs as solid w/o filling in holes. probably i just need to get used to working with the path data itself 🤷🏻‍♀️

@aparrish Ah, if you're not focused on performance, maybe that rune library would actually work then?

font -> rune.js -> svg -> two.js -> canvas?

As an aside, how are you exporting those gifs?

I haven't looked at it yet for p5.js, but I know processing proper could record canvases pretty easily. I've used github.com/spite/ccapture.js/ in the past and github.com/Automattic/node-can seems like it might work too. I wanna let people capture high quality animations from the game I'm working on.

@ultimape thanks! i've been using licecap because it's easy for now. very familiar with node-canvas but these are just daily experiments, not trying to do anything high quality yet. just a note, i am not a beginner—i teach creative coding (with processing and p5js) at the college level and have written a book on the topic for maker media. i actually used to share an office with rune madsen (who made rune.js)!

@aparrish whoa! I knew you were pretty cool, but that's awesome.
Rune seemed like a diamond in the rough when I found it.

Hope I didn't come off as patronizing. It looked like you were exploring workflows for rendering text and ran into a snag.

The frameworks I dug into are fresh on my mind and it makes me happy when I can help people do more cool stuff.

Please keep blogging your progress. It is very inspiring to me. 👍

@aparrish is p5.js your preferred framework/library? I've tried a few of the Ruby versions of processing (JrubyArt, propane) but never quite got them working right. I have tried p5.js, processing and even paperscript.js. I'm simply not too fond of working in JS.

@luisroca oh I hate working in javascript, it's a truly lousy language with lousy toolchains. I do like p5js, though, for certain applications (interactive sketches where you don't want or need a scene graph). p5js is also the framework that my department uses for the intro programming classes (which I teach), so another benefit of using p5js for my own experiments is that I can easily incorporate them into tutorials for my students

more rune.js, all coordinates in any path instruction that fall within the same grid square have the same elliptical motion mastodon.social/media/fBI-paEr
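
The original is in rune.js, but the grid idea reduces to something like this (sketched here with plain x/y points in p5.js style; the noise-derived parameters are placeholders): every coordinate is binned into a grid square, and all coordinates in the same square share one elliptical offset.

```js
const GRID = 40; // grid square size in pixels

function ellipticalOffset(x, y, t) {
  const gx = Math.floor(x / GRID);
  const gy = Math.floor(y / GRID);
  // derive per-square phase and radii deterministically from the square indices
  const phase = noise(gx * 10.7, gy * 3.3) * TWO_PI;
  const rx = noise(gx * 5.1, gy * 7.9) * 12;
  const ry = noise(gx * 2.3, gy * 9.1) * 12;
  return { dx: Math.cos(t + phase) * rx, dy: Math.sin(t + phase) * ry };
}

// in draw(), for every coordinate p used by the path instructions:
// const { dx, dy } = ellipticalOffset(p.x, p.y, frameCount * 0.05);
// then draw that instruction at (p.x + dx, p.y + dy)
```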

slower animation with chunkier grid and higher randomness thresholds. making myself queasy 🙃 mastodon.social/media/0FDPeV7Z

going back momentarily to the letter morphing from earlier, here's the procedure applied to words from the phonetic random walk code that I posted examples of the other day... youtube.com/watch?v=ssQKONuFwv

feeling like I'm getting stuck on a particular set of ideas here, will try to shake it up soon

this one is difficult to explain but I think it actually has a lot of promise: visualizing a word's GloVe embedding by dividing a circle into arcs (one arc per dimension of the embedding) and extending each point outward from the word's center according to the value of the dimension whose arc it falls in mastodon.social/media/cZvGlIY_ mastodon.social/media/I4dFbkuh (basically an adaptation of this static.decontextualize.com/vec but with the points of the font path instead of just a blob)
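
My reading of that drawing rule, as a sketch (not the original code; embedding is assumed to be an array of floats, e.g. a 50-dimension GloVe vector, and the scale factor is made up):

```js
// push a font-path point away from the word's center (cx, cy) by the value
// of whichever embedding dimension owns the arc the point's angle falls in
function displacePoint(p, cx, cy, embedding, scaleFactor = 30) {
  const n = embedding.length;
  const angle = Math.atan2(p.y - cy, p.x - cx);
  // map the angle (-PI..PI) onto one of n arcs
  const arc = Math.floor(((angle + PI) / TWO_PI) * n) % n;
  const push = embedding[arc] * scaleFactor; // negative values pull inward
  return {
    x: p.x + Math.cos(angle) * push,
    y: p.y + Math.sin(angle) * push,
  };
}
```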

@aparrish I like where this is going. Makes me want to laser cut things

@aparrish
is it possible to show a sentence as a walk through semantic space & visualize it in a way that makes explicit (say) rhetorical parallelism?

@aparrish hard for me to imagine the right way to produce a flat projection

@enkiv2 i mean, tsne is the traditional choice for something like this

@aparrish t-distributed stochastic neighbour embedding?

@enkiv2 yep! for reducing the dimensionality to 2d. unless we're conceptualizing this task completely differently!

@aparrish The dimensionality reduction for a path wouldn't look like that for a word, right? If you had the word-blobs as points and positioned the lines between them, the two coordinate systems wouldn't match up.

@enkiv2 yes, the words themselves are just n-dimensional vectors. the way that I drew them in that experiment was to find n equally spaced angles and then draw a line whose length corresponds to each dimension—a hack to make it easier to see the values for each dimension. how to make that play nicely with a 2d projection of the word vectors themselves as points is up to you (or whoever!)