More cursed C# observations: UIntPtr.Zero is not a compile-time constant for some reason, possibly because UIntPtr's size is not known at compile time (EDIT: if it's not clear, this is because C# acts as if it does not know what CPU architecture it will be running on until runtime). Therefore, you cannot create a method which takes a UIntPtr as an optional parameter with default argument UIntPtr.Zero.
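A minimal sketch of the complaint (method name `Frob` is illustrative, not from the original post):

```csharp
using System;

public static class Sketch
{
    // The signature being described is what gets rejected: UIntPtr.Zero is
    // a static readonly field, not a compile-time constant, so this line
    // does not compile:
    //
    //   public static void Frob(UIntPtr p = UIntPtr.Zero) { }
    //   // error CS1736: Default parameter value for 'p' must be a compile-time constant
    //
    // The same shape with an ordinary integer type is fine:
    public static void Frob(uint u = 0)
    {
        Console.WriteLine(u);
    }

    public static void Main() => Frob();
}
```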
Okay there turns out to be a solution but I can't tell if it makes the situation more or less cursed https://mdon.ee/@slyecho/112019623778803567
@mcc why would anyone ever need a data structure larger than 2^31 bytes?
@mikaeleiman @mcc it's a MS technology. Seems to make sense to paraphrase Bill Gates on the subject of 640k.
@oblomov @Flux @mcc I’m not a C# dev, but I think the focus is more on the UInt than the Ptr. As I explained in a separate reply, I would expect an unsigned type because it specifies the size of a data type, which in theory should never be negative. There are perhaps reasons why it is a signed integer, though (as explained in that reply).

Per the “large static arrays” statement, I would assume for those arrays you would write (in C parlance) `sizeof(int) * 1000` to denote the size in memory of an allocated array of 1000 elements. At least for the C# integral types, it appears to follow that convention. I’m not sure whether C# would handle the “multiply by the element size” step and compute that internally. If so, then the “large static arrays” statement would explain why a 64-bit (u)int would be expected. Again, I’m not a C# dev.
@waltertross @mcc ironic. We clearly need data structures larger than 2^31 today. The Gates paraphrase is to point out the lack of foresight in another MS technology.
Am I just too obtuse?
@mcc huh. Is that the default type for integer literals? Otherwise I don't get it...
@typeswitch @mcc I assume the oddity of this observation is that these are signed integers (allowing negative values) instead of unsigned (non-negative only). In C99, `sizeof` returns an unsigned integer, because it makes sense that the size of a type should never be negative. I’m not sure if this is what @mcc is talking about.
I mentioned this to my fellow C++ engineer coworker, and he said that perhaps it was to avoid problems with over/underflowing unsigned integers. That could get weird in a situation like `sizeof(int8) - sizeof(int16)` returning a huge value. On the other hand, signedness also allows for compiler optimizations: the C++ standard specifies that unsigned integer arithmetic wraps around, while signed integer over/underflow is undefined behaviour, and compilers can exploit that assumption for optimization purposes.
I guess returning a signed integer isn't strange if the goal is to prevent weird underflows during comparisons. But why return an int32 instead of an ssize_t analog?
My hunch for why it returns an int32 was that it's treating sizeof(T) like an integer literal, and that C# treats integer literals as int32 by default (a bit like C ...). But I don't know if that's true.
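For what it’s worth, part of that hunch can be checked: in C#, `sizeof` on the built-in primitive types is usable in safe code and yields a compile-time constant of type `int` (a small sketch; `SizeofCheck` is an illustrative name):

```csharp
using System;

public static class SizeofCheck
{
    // sizeof(T) on primitive types is a constant expression of type int,
    // so it can even initialize a const:
    public const int LongSize = sizeof(long);

    public static void Main()
    {
        Console.WriteLine(LongSize);               // 8
        Console.WriteLine(sizeof(long).GetType()); // System.Int32
    }
}
```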
@mcc Better yet it's int for a bunch of basic types and _implementation defined_ for others.
@mcc If you get a negative size, I assume that would be bad
@fishidwardrobe wait crap I didn't even notice that
@mcc Again, C# novice here, but I do get the impression much of this was designed by drunkards rolling dice … "Okay, this method will … return the value in an output parameter." "LOL good one."
@fishidwardrobe@mastodon.me.uk @mcc@mastodon.social in the new paradigm of the negative size world, who's to say which assumptions hold true.
@mcc what happens if you have a type larger than INT32_MAX bytes
@scottcheloha that would seem to be the question, yes.
@mcc Blame Visual Basic 7 for deciding that unsigned integers didn’t belong in their language.
@mcc The whole Common Language Specification (CLS) thing is so weird. The CLR was supposed to be polyglot from the jump, unlike the JVM, so they built a runtime but prohibited the standard library from using all of it for compatibility reasons. Then C# was so successful that everybody went for C# compatibility instead of CLS compatibility, and none of the polyglot stuff ever really materialized.
@mcc There are so many aspects of C# that are really attractive.
The one thing that gives me pause is its surface area. It's just so BIG.
This is a great example of what that can look like.
OK, this is something I do know a little bit about for C#.
Edited:
WRONG - if you set the build target platform to either x64 or x86 (=32 bit) IntPtr and UIntPtr should have a fixed size, and UIntPtr.Zero should be treated as constant in that case. With AnyCPU it's runtime.
WRONG - as an alternative you should be able to set an optional parameter to
= (UIntPtr)0
as a cast of a constant value should be a constant expression.
Disclaimer: I didn't test this before throwing it at you.
Normally, yes they are.
For example, C# considers these to be perfectly fine:
static void ConstParmTest2(ulong i, ulong j = (ulong)42) {
}
static void ConstParmTest3(ulong i, double d = (double)42) {
}
But it doesn't for IntPtr or UIntPtr! It appears they are not treated as first-class system types, and C# actually applies a static conversion function for casts to them from "normal" integer types.
Today I learned...
I wrote some code to test it, and found out I was completely wrong about everything.
UIntPtr.Zero doesn't become a constant, as I'd thought, if you set the platform type. Zero is defined as a static readonly field, which seems completely wacko!
Using (UIntPtr)0 is also rejected as not being a run-time constant.
Given this, the only way I see to do what you wanted is an old-fashioned overload:
int Foo(int i) {
return Foo(i, UIntPtr.Zero);
}
int Foo(int i, UIntPtr p) {
...
}
It's even more cursed than that, BTW:
Even UIntPtr.MaxValue or UIntPtr.MinValue are static properties rather than constants, whereas for all the normal integer types those are constant values.
Super double yikes!
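That difference is visible via reflection. A quick check (a sketch, assuming a current .NET runtime, where `UIntPtr.Zero` is a static readonly field and `UIntPtr.MaxValue` is a get-only property):

```csharp
using System;
using System.Reflection;

public static class ReflectionCheck
{
    public static void Main()
    {
        // For a normal integer type, MaxValue is a literal (const) field:
        Console.WriteLine(typeof(uint).GetField("MaxValue")!.IsLiteral);    // True

        // For UIntPtr, Zero is a static readonly field, not a const...
        FieldInfo zero = typeof(UIntPtr).GetField("Zero")!;
        Console.WriteLine(zero.IsLiteral);                                  // False
        Console.WriteLine(zero.IsInitOnly);                                 // True

        // ...and MaxValue isn't a field at all, it's a property:
        Console.WriteLine(typeof(UIntPtr).GetProperty("MaxValue") != null); // True
    }
}
```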
@mcc That's … I get why, but in a widely-used language by a large company that's been around this many years, I would have expected that to have been fixed.
@mcc This is not really due to UIntPtr size or anything. It's more of a limitation of what kind of expressions can be used for parameter default values, and there are quite a few.
The easiest solution is to use the default value of the struct:
void Test(UIntPtr p = default);
The default value for a struct zero-initializes all of its fields.
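A quick sanity check of that workaround (a sketch; `Test` is given a body and return value here purely for illustration):

```csharp
using System;

public static class DefaultDemo
{
    // `default` is always a legal default argument for a struct parameter;
    // for UIntPtr it zero-initializes, i.e. it equals UIntPtr.Zero.
    public static UIntPtr Test(UIntPtr p = default) => p;

    public static void Main()
    {
        Console.WriteLine(Test() == UIntPtr.Zero); // True
    }
}
```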
@mcc More
@mcc you probably have a good reason to use a default parameter instead of a zero-arity overload of the method, which is how the framework tends to work around this limitation.
@mcc also, more cursed for sure
Damn, wish I'd thought of that. It's still cursed, but it confirms it's not at heart a size issue.
As I explored this more, I've found that C# really thinks of IntPtr and UIntPtr as regular user-defined structs, not fundamental integer types.
Here's where it gets really WTF.
First, I have *two* actual improved solutions for you now if you're using the latest C# version:
void UIntPtrDefaultTest1(int i, nuint u1 = 0) {
}
void UIntPtrDefaultTest2(int i, UIntPtr u2 = (nuint)0) {
}
+
Given those are valid, if nuint is just a compiler synonym for UIntPtr, then these should be valid too, right?
void UIntPtrDefaultTest3(int i, UIntPtr u3 = 0) {
}
void UIntPtrDefaultTest4(int i, UIntPtr u4 = (UIntPtr)0) {
}
Roslyn: Absolutely not, get the fuck out of here with that!
CS1750 A value of type 'int' cannot be used as a default parameter because there are no standard conversions to type 'UIntPtr'
CS1736 Default parameter value for 'u4' must be a compile-time constant
+
This is of course completely inconsistent with how other fundamental integer types are treated.
I respect many of the early design decisions in C#, and many of the improvements made over time, but things like this make me feel that it's started going off the rails in recent years.
+
"But Clifton", someone might say, "maybe this is just because C# wants to maintain a distinction between the compiler type names and the underlying value types." (Nobody is going to say that, but play along here.)
Nope. If we try the exact parallel format with UInt32 (System.UInt32) instead:
void UIntDefaultTest1(int i, UInt32 u1 = 0) {
}
void UIntDefaultTest2(int i, UInt32 u2 = (UInt32)0) {
}
Roslyn: This is fine. Nothing to see here, move along.
In short, it's messed up.
@mcc my kindergartener has started saying “cursed” and I don’t know how I feel about it