"The user manual contains some significant errors. Most of these are due to last minute changes to achieve a greater degree of compatibility with IBM's implementation of MS-DOS (PC DOS). This includes the use
of "\" instead of "/" as the path separator, and "/" instead of "-"
as the switch character."

it's always kind of interesting when you encounter a fossil trace of someone's Giant Mistake as it happened.

@brennen The weird thing about this is that "/" was the switch character on RT-11, which CP/M imitated (including using "/" as the switch character in PIP.COM), and MS-DOS 1.0 was a carbon copy of CP/M (though I don't remember if it had PIP).

@kragen @brennen There's a reason why MS-DOS 2.0 specifically was planned to use / as the path separator and - as the switch, though.

MS-DOS 1.x was absolutely a CP/M clone, but MS-DOS 2.0 was intended to be something entirely different - the eventual goal was to turn MS-DOS into a single-user, single-tasking Unix-like, with Xenix binary compatibility (much like the goal of Heinz Lycklama's Version 6 Unix-derived LSX).

Obviously things did not ultimately go that direction, but a fair amount of Unix semantics made it into MS-DOS as a result anyway.

@bhtooefr @kragen @brennen
This is something I hadn't heard!

I knew that directories were one of several features Microsoft added that were taken from UNIX, & that Microsoft's status as a UNIX vendor at the time was related, but I was unaware of any plan to make them actually binary-compatible! That would have been a very interesting system.

Was Xenix even using 16-bit words? Unix on micros usually had 18-bit words, right?

Unix on micros and minis used 16-bit words; AFAIK there's actually never been a port of UNIX to an architecture with a non-power-of-two word size, but I could be wrong about that.

@ACE_Recliner @brennen @bhtooefr @kragen
Huh, that strikes me as odd. I was pretty sure that earlier PDP models than the one UNIX was developed on had 18-bit words, & it seems a little weird to change the word length to something without common factors. But, that was early days & maybe Digital didn't care the way Intel did.

I recall that, somehow, MINIX had 9-bit *bytes*. (At least, a friend who was porting MINIX to modern hardware said that & I don't think he was screwing with me.)

@enkiv2 @kragen @brennen @ACE_Recliner The PDPs date back to the era when each new model got a new architecture...

The PDP-11 was the first (and last) of the 16-bit PDP family (with the VAX being a 32-bit continuation of the PDP-11 architecture).

There were also 18-bit, 12-bit, and 36-bit PDPs - Unix originally targeted the PDP-7, one of the 18-bit PDPs, but was quickly ported to PDP-11.
@enkiv2 @kragen @brennen @ACE_Recliner I think what happened here is... 36-bit was a common size for business machines (as 35 bits was the minimum to store -9,999,999,999 through +9,999,999,999, and 10 digits was the norm for adding machines and the like). 18-bit (and storing a carry somewhere) was what you did when you wanted half of the word length to reduce complexity.

6-bit characters fit well into this model, and that led to 12-bit machines, for when you wanted to cut an 18-bit design down further for non-business uses.
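A quick sanity check of the bit arithmetic in the post above, in Python (the figures are just the ones quoted there):

```python
# The post's claim: 35 bits is the minimum to store the signed range
# -9,999,999,999 through +9,999,999,999 (a 10-digit adding machine).

MAX_TEN_DIGITS = 10**10 - 1  # 9,999,999,999

magnitude_bits = MAX_TEN_DIGITS.bit_length()  # bits for the unsigned magnitude
total_bits = magnitude_bits + 1               # one more bit for the sign

print(magnitude_bits, total_bits)  # 34 35
```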
@enkiv2 @kragen @brennen @ACE_Recliner But, IBM decided to go 32-bit for the System/360, and that became the standard for *everything* afterwards - cutting it down resulted in 16 or 8 bits. I think another factor was that bit-slicing was easier to do with powers of 2, so minis ended up adopting it rather fast once they moved from transistors to MSI logic, along with the evolutionary pressures exerted by microprocessor design as well. But yes, if you're looking to answer questions like why a byte is universally 8 bits and stuff like that, the answer is pretty much the System/360 without fail.

@ACE_Recliner @enkiv2 @kragen @brennen I do also suspect the adoption of BCD floating point arithmetic may have had something to do with it, too - now, your 10 digit mantissa needs 40 bits, you need another few nibbles (in practice, at least two) of exponent, and you need at least two sign bits (for both mantissa and exponent) somewhere. So, a 36-bit architecture has little advantage over a 32-bit architecture - either way you need two registers to store your BCD floating point number.

And, BCD numbers can easily be split into 4 bit chunks.
@ACE_Recliner @enkiv2 @kragen @brennen Going even further off topic... this is why HP calculators historically had a 56-bit word. 10-digit (nibble) mantissa, 2-digit exponent, and a digit each for mantissa and exponent sign (the extra bits in the sign digits were used to store some display formatting state, IIRC, as register contents were pushed to the LED drivers almost unmodified IIRC).

Starting in the mid 1980s, HP's calculators moved to a 64-bit word, which they still use today - 12-digit mantissa, 2.5-digit exponent, with a digit for mantissa sign, and IIRC the exponent sign encoded in the exponent (hence the 2.5-digit exponent instead of 3-digit).
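As a rough illustration of the 56-bit layout described above, here's a Python sketch that splits a register into its 14 BCD nibbles: one mantissa-sign digit, ten mantissa digits, one exponent-sign digit, and two exponent digits (14 × 4 = 56 bits). The specific nibble ordering is an assumption for illustration, not the documented hardware layout.

```python
def unpack_hp56(word: int) -> dict:
    """Split a 56-bit register into 14 BCD nibbles and group the fields.

    Nibble ordering (least significant first) is assumed, not documented:
    [0:2] exponent, [2] exponent sign, [3:13] mantissa, [13] mantissa sign.
    """
    n = [(word >> (4 * i)) & 0xF for i in range(14)]
    return {"exp": n[0:2], "exp_sign": n[2], "mantissa": n[3:13], "mant_sign": n[13]}

# Build a word from an arbitrary nibble pattern and round-trip it:
nibbles = [5, 0] + [0] + [1, 2, 3, 4, 5, 6, 7, 8, 9, 0] + [9]
word = sum(d << (4 * i) for i, d in enumerate(nibbles))

fields = unpack_hp56(word)
assert fields["exp"] == [5, 0] and fields["mantissa"][0] == 1
assert fields["mant_sign"] == 9
assert 14 * 4 == 56  # the four fields account for the whole 56-bit word
```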
@ACE_Recliner @enkiv2 @kragen @brennen The ALU was bit-serial on the 56-bit calculator processors, and it was 4 bits wide on the Saturn processor originally used in the 64-bit calculators.

Note that none of these processors exist any more - all current HP calculator designs fit into one of three categories: a Chinese 6502-derived design with a unique number format, an ARM design emulating the 56-bit architecture, or an ARM design running a C translation of the Saturn assembly math libraries. (A few now discontinued calculators such as the 50g were ARMs running a Saturn emulator.)

@bhtooefr @ACE_Recliner @enkiv2 @brennen I think BCD floating point started to die out around the time of the 360 (and nobody else adopted the 360's hexadecimal floating point either, just like EBCDIC), but fixed-point BCD remained important until at least the 1980s. Supporting this as well: one of the first major bit-sliced computer designs (the Xerox Alto) used 16-bit words, despite most of its original technical backers coming from SDS and BCC, which produced transistorized computers operating on 36- or 24-bit words.

@ACE_Recliner @brennen @enkiv2 @bhtooefr The System/360 was certainly an influence, but I suspect that there was also an underlying logic: 6-bit bytes and word-addressable memories led to a lot of uncomfortable compromises in character-processing applications, and so machines designed to be good at character data processing needed byte-addressing and bytes of at least 7 bits. Nobody else adopted the 360's EBCDIC, though; Univac used FIELDATA for a while, but everyone else went ASCII.

@brennen @kragen @enkiv2

[One of these days the VAX cluster my officemate still has running will fully and finally die, and my office will suddenly be a *lot* quieter!]

@keithzg @enkiv2 @kragen @brennen @bhtooefr this also helps explain why all the 1970s RAM ICs are odd-by-modern-standards 1 or 4 bit.

@bhtooefr @enkiv2 @brennen @ACE_Recliner Right, although in fact there were a large number of PDP-11 models.

@enkiv2 @kragen @brennen @ACE_Recliner Going back to this, there are some places in a *nix where you have to handle 9-bit values, but I'd be extremely surprised if a byte in MINIX was anything other than 8 bits.
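The 9-bit values being alluded to are presumably the classic permission triads (rwx for user, group, and other), which pack into exactly nine bits regardless of the machine's byte size; a quick Python illustration:

```python
import stat

# rwxr-xr-x: three octal digits, i.e. nine permission bits.
mode = 0o755
assert mode.bit_length() == 9

# Decompose into the three 3-bit triads:
user, group, other = (mode >> 6) & 0o7, (mode >> 3) & 0o7, mode & 0o7
assert (user, group, other) == (7, 5, 5)

# The same value spelled with the stat module's named bits:
assert mode == (stat.S_IRWXU
                | stat.S_IRGRP | stat.S_IXGRP
                | stat.S_IROTH | stat.S_IXOTH)
```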

@enkiv2 @ACE_Recliner @brennen @bhtooefr @kragen Unix was implemented on a PDP-7 and later ported to PDP-11. You might be thinking of TOPS-10 and TOPS-20, which were implemented for the PDP-10 and compatible family of computers.

That said, Unix is agnostic about word-length, except insofar as pointers are required to fit in a single word.

@enkiv2 @ACE_Recliner @brennen @bhtooefr @kragen Minix was initially released for the 8086, as I recall; I think he was pulling your leg.

@vertigo @ACE_Recliner @brennen @bhtooefr @kragen
I'm sorry, I meant to write MULTICS. (No idea how I managed to screw that one up!)

@enkiv2 @kragen @brennen @ACE_Recliner @vertigo 9 bits on the hardware that MULTICS ran on is completely reasonable, they were 36-bit architectures. You had either 9 or 6-bit bytes commonly, as a result.

@bhtooefr @kragen @brennen @ACE_Recliner @vertigo
Yup. It was surprising to me since I thought byte length was standardized even when word length wasn't divisible by it.

Apparently assumptions about byte & word length were all over the codebase & gave him lots of hassle.

I recall hearing about a working port a while back but I have no idea if it's the same one this guy was working on. He moved to Finland suddenly & then later dropped off the grid.

@enkiv2 @ACE_Recliner @brennen @bhtooefr I think MINIX always had 8-bit bytes, but yes, the original "Unix" ran on the 18-bit PDP-7, from 1969 to 1970. Eventually PDP-7 Unix did do multitasking, but I don't think it ever got, for example, a hierarchical filesystem. is pretty much the only source on this kind of thing.

And yeah, the PDP-7 and PDP-11 were unrelated instruction set architectures.

@enkiv2 @brennen @kragen Depended on the platform, but Unix as we know it (as in, after it moved to the PDP-11) started out with 16-bit words, and on 8086/80286 builds it was 16-bit.

(Unix as it existed on the PDP-11 mapped incredibly well to the 8086 and 80286 - AFAIK the 8086/80286 segmentation model was naturally pretty close to the PDP-11 bankswitching model.)

@bhtooefr @brennen @kragen
It makes sense. The 8008 was developed because somebody thought the 4004 reminded him of a PDP-1 & wanted to enhance the similarities in order to take advantage of them, right? So, Intel had folks familiar with the arch.

@enkiv2 @bhtooefr @brennen No, the 8008 and 4004 are totally different, unrelated architectures. Check out the datasheets. The 8008's instruction set cloned the instruction set of the discrete-logic processor in the Datapoint terminal they developed the 8008 for.

@bhtooefr @brennen @kragen Alright. (This is just something I'm repeating from Fire in the Valley, & so it may have been misreported or I may have misremembered or misrepresented it.)

@kragen @brennen It had COPY instead of PIP. Only the kernel was a clone. The command set was original, IIRC.

@vertigo @brennen Yeah, it had COPY and RENAME. My first OS was HDOS, which had both PIP and more user-friendly commands like COPY, which were implemented by translating them into PIP commands by SYSCMD.SYS.
