There is definitely a path to abstracting and centralizing the dictionary view logic such that I could build multiple viewing areas.

Finished up the viewing of the file attributes. I need to think about it some more as I don't like the code for it. It is messy and infectious. There is so much logic spread around that it will be painful to debug later.

I'll need to deal with it sooner rather than later. One big thing with dealing with BASIC is trying to keep everything localized. Without scoping rules, keeping things centered is key.

Going to start working on adding the dictionaries. I ran into a situation where I wanted to quickly edit some values on the record but had to leave the editor to see what each piece of data means.

In UniVerse the dictionary is the schema of a data file. The big thing is that the dictionary is optional; it's only if some fairly strict rules have been followed that you can actually just read a dictionary and display that information alongside the data.
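
The real thing is UniVerse BASIC reading dictionary records, but the idea can be sketched in Python. The field names, layout, and record contents here are all made up for illustration:

```python
AM = chr(254)  # UniVerse attribute mark delimiting a dynamic array

# Toy "dictionary": field name -> attribute number (1-based), mimicking
# dictionary entries that point at positions in the data record.
DICT = {"NAME": 1, "CITY": 2, "BALANCE": 3}

def label_record(record: str) -> dict:
    """Split a dynamic array on attribute marks and pair each
    dictionary field with its value, so the data can be shown
    alongside what it means."""
    attrs = record.split(AM)
    return {name: attrs[pos - 1] if pos <= len(attrs) else ""
            for name, pos in DICT.items()}

record = AM.join(["ADA", "LONDON", "100"])
print(label_record(record))  # {'NAME': 'ADA', 'CITY': 'LONDON', 'BALANCE': '100'}
```

If the dictionary is missing or stale, the labels are wrong or absent, which is exactly why this only works on well-managed files.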

Luckily most of the data we deal with is heavily managed.

I don't think there will be a way to break past this version without some new understanding. 7ms isn't bad but it is noticeable. The next place to try to claw back some time is the display.

I prefer the concatenation one as I don't need to inject dummy text into blanks. The problem with -1 append is that if the element it appends is null, the attribute mark never gets added. This results in misaligned arrays wherever there are nulls.

It's much more straightforward though it is a bit less than optimal.

This next version uses -1 to append things to the end of the array. This seems to be faster than doing concatenation, but it's in the same ballpark.
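
Roughly what the -1 version does, sketched in Python (the real code is BASIC's `ARR<-1> = LINE`; this simulation also models the null quirk discussed above):

```python
AM = chr(254)  # attribute mark

def append_attr(arr: str, value: str) -> str:
    """Simulate ARR<-1> = value: tack a new attribute onto the end.
    Only the tail of the string is touched, so no positional walk."""
    if value == "":
        return arr  # the quirk: appending a null adds nothing at all
    return value if arr == "" else arr + AM + value

out = ""
for line in ["LINE1", "LINE2", "LINE3"]:
    out = append_attr(out, line)
print(out.split(AM))  # ['LINE1', 'LINE2', 'LINE3']
```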

This runs in 5ms.

This next version uses string concatenation to speed things up. I'm guessing that the version with <CTR> is actually finding the position by walking along the string. In UniVerse, a multivalue array is ultimately a string, so it seems referring to a position is actually quite expensive.

The below option dodges all that and straight up adds things to the end of the string.
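
The original BASIC is gone from this post, but the shape of the concatenation approach, simulated in Python, would be something like:

```python
AM = chr(254)  # attribute mark

def chunk_by_concat(lines):
    """Build the display array by plain concatenation: every line gets
    an attribute mark, blanks included, so nothing needs a dummy
    value and nothing is ever searched for by position."""
    out = ""
    for line in lines:
        out += line + AM
    return out[:-1] if out else out  # drop the trailing mark

print(chunk_by_concat(["A", "", "B"]).split(AM))  # ['A', '', 'B']
```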

This runs in 7ms. Over a factor of 10 difference!

Doing some performance testing on my convert raw lines function. It takes a set of raw lines and turns them into a set of lines that I can display and number properly.

This version uses a CTR variable to properly place things in the list. This runs in about 100ms. I'm just roughly doing this soooo this isn't going to be exact.
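
A Python simulation of what the CTR version is doing (the real code is BASIC's `ARR<CTR> = LINE`; the split here models UniVerse walking the string to find the position):

```python
AM = chr(254)  # attribute mark

def set_attr(arr: str, pos: int, value: str) -> str:
    """Simulate ARR<pos> = value. The split models UniVerse walking
    the string to find attribute `pos`; doing that walk on every
    single write is what makes this version slow."""
    attrs = arr.split(AM) if arr else []
    while len(attrs) < pos:
        attrs.append("")  # positional writes fill in blanks for free
    attrs[pos - 1] = value
    return AM.join(attrs)

out = ""
ctr = 0
for line in ["A", "", "B"]:
    ctr += 1
    out = set_attr(out, ctr, line)
print(out.split(AM))  # ['A', '', 'B']
```

Note that, unlike -1 append, positional writes handle blank lines correctly, which is one reason to reach for it in the first place.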

A database that loses your data and an editor that ruins your files.

Cross my fingers and hope to die that I never fuck up {rhyme with die}.

Finished up the voiding logic for the editor. I generate an MD5 sum of the file and save it; the next time the file opens, the editor checks whether the hash is the same and, if it isn't, wipes the UNDO stack. Easy peasy.
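
The voiding logic, modelled in Python with hashlib (the real editor is BASIC; the function names and the demo file contents here are made up):

```python
import hashlib
import os
import tempfile

def file_md5(path: str) -> str:
    """Hash the whole file; cheap enough at editor-file sizes."""
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()

def load_undo_stack(path: str, saved_hash: str, undo_stack: list) -> list:
    """If the file on disk no longer matches the saved hash, the undo
    history no longer applies, so void it."""
    if file_md5(path) != saved_hash:
        return []
    return undo_stack

# Demo: edit the file behind the editor's back and watch the stack void.
fd, path = tempfile.mkstemp()
os.write(fd, b"100 PRINT 'HELLO'")
os.close(fd)
saved = file_md5(path)
print(load_undo_stack(path, saved, ["edit1"]))  # ['edit1'] - hash matches
with open(path, "wb") as f:
    f.write(b"changed elsewhere")
print(load_undo_stack(path, saved, ["edit1"]))  # [] - stack voided
os.remove(path)
```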

A worrying thing is my editor randomly went haywire and got into some infinite loop of displaying text. Luckily killing it was enough and all I lost were the changes I was making. No idea what triggered it. Going to have to see if it comes up again but goddamn.

One thing I still need to deal with is how to handle edits by other systems. I'm curious how vim deals with this. Maybe it saves a checksum when it writes the file, and if the file changes before vim opens it again, it voids the undo file. That might be a good idea regardless.

Had to do some juggling in the editor with how I persist my undo stack. I have it working pretty well now. I chose a cap of 10 just so I could figure out what to do once the cap is hit. For now I think I'll set it to a 1k buffer. Hopefully that's enough. I need to watch it to see how many changes that really is. I track each character so I imagine it will go quickly. Currently, once I hit the cap, I take the recent half and make that the front, which lets me re-use half the buffer.
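
The halving trick, sketched in Python (the real structure is a BASIC buffer; the cap and action values here are illustrative):

```python
CAP = 10  # tiny cap while experimenting; the plan is 1k later

def push_action(stack: list, action, cap: int = CAP) -> list:
    """Push an undo action. Once the cap is exceeded, keep only the
    most recent half at the front, so half the buffer gets re-used
    instead of shifting everything on every push."""
    stack.append(action)
    if len(stack) > cap:
        stack[:] = stack[-(cap // 2):]
    return stack

s = []
for i in range(12):
    push_action(s, i)
print(s)  # [6, 7, 8, 9, 10, 11]
```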

This also opens the door to infinite history, but I'll hold off on that until I actually start using the editor this week. I need to figure out how I want to handle auto-formatting files and the idea of having a checkpoint. I can't really keep infinite history as the structure I'm using can only keep 64k actions.

Some clean up required but! I got the undo/redo system working properly. I'm pretty sure it works the way I think it does. I'm pretty happy with how it came together and it is quite elegant now. The original idea of having just one stack was, I think, the problem: it made everything more complex than it needed to be.

I can now happily delete hundreds of lines and flip between them without things lagging up.
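
A minimal sketch of the two-stack shape in Python (the actual editor is BASIC, and what an "action" contains is elided here):

```python
class History:
    """Two stacks: one of done actions, one of undone actions.
    A fresh edit clears the redo side - much simpler than trying to
    juggle everything in a single stack."""

    def __init__(self):
        self.undo_stack = []
        self.redo_stack = []

    def do(self, action):
        self.undo_stack.append(action)
        self.redo_stack.clear()  # a new edit invalidates redo history

    def undo(self):
        if not self.undo_stack:
            return None
        action = self.undo_stack.pop()
        self.redo_stack.append(action)
        return action

    def redo(self):
        if not self.redo_stack:
            return None
        action = self.redo_stack.pop()
        self.undo_stack.append(action)
        return action

h = History()
h.do("delete line 1")
h.do("delete line 2")
print(h.undo())  # delete line 2
print(h.redo())  # delete line 2
```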

I need an intermediate step that converts any string occurrences of my dummy string into an alternative and then changes them back after I've removed the dummies. This is definitely gross but the speed gain is worth it. I need to think of a better answer though. Hackery.

As you might see, this means I can't edit inside the EVA editor as it will replace the string.

A bit of funkiness I ran into was that using -1 to append data wouldn't actually append nulls. This wasn't a problem in BASIC programs because there we have a minimum of a * on blank lines. However, data files do have blanks, so we need to make sure blanks still fill in their lines. So I place a dummy value which I then remove.
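
The quirk and the dummy-value workaround, simulated in Python (the sentinel string here is made up; the real one differs):

```python
AM = chr(254)          # UniVerse attribute mark
DUMMY = "@@BLANK@@"    # hypothetical sentinel standing in for a blank

def append_attr(arr: str, value: str) -> str:
    """Simulate ARR<-1> = value, including the quirk: appending a
    null never adds an attribute mark."""
    if value == "":
        return arr
    return value if arr == "" else arr + AM + value

def chunk(lines):
    out = ""
    for line in lines:
        out = append_attr(out, line if line != "" else DUMMY)
    # Strip the dummies afterwards. The hackery: a real line that
    # happens to equal the sentinel would be clobbered too.
    return out.replace(DUMMY, "")

naive = ""
for line in ["A", "", "B"]:
    naive = append_attr(naive, line)
print(naive.split(AM))                  # ['A', 'B'] - the blank vanished
print(chunk(["A", "", "B"]).split(AM))  # ['A', '', 'B'] - dummy preserved it
```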

This is the new code now that I know that doing the <CTR> access is expensive. I could have used -1 before, but I had made the decision to be more explicit about what I was writing, and in this case that bit me. This code is now significantly faster. I ran a test case and the difference between <-1> and <CTR> is about a factor of 10 to 100. I can now use dd in my editor comfortably.

This is the original code I was using to chunk the raw data into displayable lines. This was one of the slower parts of the system. This is ultimately because of <CTR> which is an array access but in UniVerse this ends up being very expensive.
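
Why the gap is so big can be simulated in Python by counting how many characters each strategy has to touch (an illustrative model, not a measurement of UniVerse itself):

```python
AM = chr(254)  # attribute mark

def build_with_ctr(lines):
    """Positional writes: each ARR<CTR> = line rescans the string
    so far, so the work grows quadratically with the line count."""
    out, chars_touched = "", 0
    for ctr, line in enumerate(lines, 1):
        chars_touched += len(out)  # the walk to find position `ctr`
        attrs = out.split(AM) if out else []
        while len(attrs) < ctr:
            attrs.append("")
        attrs[ctr - 1] = line
        out = AM.join(attrs)
    return out, chars_touched

def build_with_append(lines):
    """Appending only ever touches the tail, so the work is linear."""
    out, chars_touched = "", 0
    for line in lines:
        chars_touched += len(line) + 1
        out = out + AM + line if out else line
    return out, chars_touched

lines = ["X"] * 1000
ctr_out, ctr_cost = build_with_ctr(lines)
app_out, app_cost = build_with_append(lines)
assert ctr_out == app_out  # same result, wildly different cost
print(ctr_cost > 100 * app_cost)  # True: well over two orders of magnitude
```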

krowe boosted

Unless you were there, you really cannot appreciate how much damage Intel did to the PC ecosystem with the 286.

The 286 was the successor to the 8086 and 8088 CPUs that powered the original IBM PCs. It offered a huge step forward from them.

Those older chips always ran in "real mode," where memory locations had fixed addresses, and any running program could modify the contents of any address. This meant that you couldn't have two programs running at once, because one might try to use a bit of memory the other was already using, and blammo!

The 286 introduced "protected mode," which prevented programs from being able to mess with memory allocated to other programs. Instead of addresses corresponding directly to blocks of memory, in protected mode they were treated as "virtual" addresses, and mapped to memory allocated just for that program.

Protected mode meant the days when one program could reach into another one and mess with its memory would be over. And that opened up all sorts of possibilities. You could have real multitasking! A whole range of crash bugs would be instantly eliminated! Suddenly the PC began to look like a machine that you could put up against a UNIX workstation with a straight face.

But there was a problem. To maintain backwards compatibility with the old chips, the 286 had to boot into real mode. It could then shift into protected mode on demand. But -- and this is a big BUT -- once it was shifted into protected mode, IT COULD NOT SHIFT BACK. The only way to get back into real mode was to reboot the PC.

Which was a problem, because every PC user owned a huge library of DOS software, much of which could only run in real mode. So the 286 gave you multitasking -- but if you ever needed to run a real-mode program, you had to reboot your PC (and lose all the other running programs) to run it.

This was, as you may imagine, not ideal.

