#tex

11 posts · 8 participants · 0 posts today
Continued thread

[1/3] #DANTE2025 Friday starts with “Storytelling with Frank [Mittelbach]”. He admits that it took him a while to realize how badly we have treated users who rely on assistive technology such as screen readers! He tries to convey this impression by playing screen-reader recordings of an untagged document and comparing them with the tagging project's results so far. The examples can be found at latex3.github.io/tagging-proje
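
For readers who want to try the comparison themselves: a minimal sketch of a document that opts into the tagging project's test phase, assuming a LaTeX release recent enough to provide \DocumentMetadata (the exact key values follow the tagging project's documentation at the time of writing and may change between releases):

\DocumentMetadata{
  lang        = en,
  pdfversion  = 2.0,
  pdfstandard = ua-2,                    % target PDF/UA-2 accessibility
  testphase   = {phase-III, math, table} % enable the tagging test code
}
\documentclass{article}
\begin{document}
Compiled this way, the PDF carries a structure tree (headings,
paragraphs, math, tables) that screen readers can navigate.
\end{document}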

Continued thread

#DANTE2025 is starting at Darmstadt University of Applied Sciences.

Mr. Lion and I are extremely curious about the discussions and are expecting a boost for tagging and accessibility in #TeXLaTeX.

We already realized yesterday that there is too little time for all the projects we are working on… but… we are all trying our best.

I will try to sum up the talks below this.

English version in the first reply.

#DANTE2025 is starting at Darmstadt University of Applied Sciences.

The lion and I are ready to go and curious about the discussions. Probably a lot about tagging and accessibility again.

At yesterday's pre-conference get-together we already noticed that everyone here wants to get far too many things done… but unfortunately the day only has 24 hours… we'll keep at it.

Below this toot I will again try to summarize the talks a bit.

[2503.24187] NeuRaLaTeX: A machine learning library written in pure LaTeX
arxiv.org/abs/2503.24187

Wait, what written in WHAT??

#arXiv #ML #MachineLearning #LaTeX #TeX #AprilFools (maybe)

arXiv.org · NeuRaLaTeX: A machine learning library written in pure LaTeX

In this paper, we introduce NeuRaLaTeX, which we believe to be the first deep learning library written entirely in LaTeX. As part of your LaTeX document you can specify the architecture of a neural network and its loss functions, define how to generate or load training data, and specify training hyperparameters and experiments. When the document is compiled, the LaTeX compiler will generate or load training data, train the network, run experiments, and generate figures. This paper generates a random 100 point spiral dataset, trains a two layer MLP on it, evaluates on a different random spiral dataset, produces plots and tables of results. The paper took 48 hours to compile and the entire source code for NeuRaLaTeX is contained within the source code of the paper. We propose two new metrics: the Written In Latex (WIL) metric measures the proportion of a machine learning library that is written in pure LaTeX, while the Source Code Of Method in Source Code of Paper (SCOMISCOP) metric measures the proportion of a paper's implementation that is contained within the paper source. We are state-of-the-art for both metrics, outperforming the ResNet and Transformer papers, as well as the PyTorch and Tensorflow libraries. Source code, documentation, videos, crypto scams and an invitation to invest in the commercialisation of NeuRaLaTeX are available at https://www.neuralatex.com
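
The post (and the abstract) doesn't show NeuRaLaTeX's actual API, but the premise rests on TeX being a full programming language. As a toy illustration of that point only, not NeuRaLaTeX code, a single ReLU neuron can be evaluated with LaTeX3's floating-point engine; the weights and inputs below are made up:

\documentclass{article}
\begin{document}
\ExplSyntaxOn
% Toy forward pass of one neuron: y = ReLU(w1*x1 + w2*x2 + b).
y~=~\fp_eval:n { max( 0, 0.5 * 2.0 + (-0.3) * 1.0 + 0.2 ) } % typesets 0.9
\ExplSyntaxOff
\end{document}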

Was looking at the source of a very early arXiv paper (arxiv.org/abs/hep-ph/9210243). The PDF is unavailable, for reasons that are obscure ("pre-1996 submission which cannot be processed"). But there's a lot of history in the source code: it looks like it was submitted as a single file, emailed from BITNET to arXiv via a gateway. It also uses phyzzx, a TeX macro package that is now obsolete (ctan.org/tex-archive/obsolete/).

I know I'll sound like a young person when I say this, but I'd love to know how that worked in practice, and what it was like to be in academia before everyone had access to a TCP/IP internet connection but after internetworked computers were ubiquitous. Sort of like the TV series Halt and Catch Fire, but with physicists.

wacoca.com/news/2485835/ U.S. stocks extend losses as inflation concerns over tariffs grow; tech shares sold off | Reuters #MKTREP #TEX #US #USA

Standard interchange image format for Open Source DTP

Hi people,

With all the major GUI #DTP applications (#GIMP, #Scribus, and #Inkscape) reaching maturity, along with the well-established #TeX software #ConTeXt and #SpeedataPublisher, plus #ImageMagick, we need a serious discussion about which image format we should adopt: one that works seamlessly across all these applications and is generally accepted in press workflows, whether or not it is embedded in a PDF file.

It should offer good quality, support #CMYK and alpha channels, be #opensource friendly, and ideally be an #openstandard.

I would dare to say that JPEG 2000 matches most of these criteria; it has existed for a very long time and could be adopted as common ground. However, it needs a combined effort, if what I am writing makes sense to you.

ImageMagick supports it; GIMP 3 currently doesn't, and neither do #LuaTeX and #LuaMetaTeX. I guess that once Inkscape is able to export CMYK, it will decide which format to use for storing CMYK images with alpha in a PDF.

It would be very beneficial if we could adopt a format that works everywhere and is press friendly. Thanks.
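
For what it's worth, a possible interim workaround on the TeX side, since LuaTeX cannot embed .jp2 directly: keep the JPEG 2000 masters and convert siblings to a supported format outside TeX (ImageMagick can read and write JPEG 2000), then let graphicx resolve extensionless names to the converted files. A minimal sketch; the file name is hypothetical:

\documentclass{article}
\usepackage{graphicx}
% Search order for extensionless \includegraphics arguments:
% prefer the .pdf/.png siblings converted from the .jp2 masters.
\DeclareGraphicsExtensions{.pdf,.png,.jpg}
\begin{document}
\includegraphics[width=\linewidth]{figure-01} % resolves figure-01.pdf
\end{document}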