I saw in a question about classes and structures that the latter should have a maximum of 16 bytes.
Why does this limit exist?
In fact you can use any size you want; the recommendation exists for the sake of efficiency. It is just an alert: if you stray from a guideline like this, investigate further whether a struct is still a good choice.
I do not remember if I read this here or elsewhere, but a struct is always a value type, so an instance of it is the object itself. Whenever you copy its value you copy the whole object, and if it is too large that is not very efficient. A reference type keeps the object on the heap, and what gets copied is only the pointer, at most 8 bytes.
If you are wondering whether there is a lot of copying: yes, there is. Every time you assign the value there is a copy; if you pass it as an argument there is a copy; even just using it there is a copy to a register and possibly to the stack. Of course, copying to a register or a local (on the stack) is very fast. If the object already exists, copying on the heap can be more or less fast as well. But there is a cost.
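A small sketch of these copy semantics (the `PointStruct`/`PointClass` types are made up for illustration):

```csharp
using System;

struct PointStruct { public int X; } // value type: the whole object is copied
class PointClass  { public int X; }  // reference type: only the pointer is copied

class Program
{
    static void Bump(PointStruct p) { p.X++; } // receives a copy of the struct

    static void Main()
    {
        var s1 = new PointStruct { X = 1 };
        var s2 = s1;             // copies the entire struct
        s2.X = 99;
        Bump(s1);                // the argument is copied too
        Console.WriteLine(s1.X); // 1 — none of the copies affected the original

        var c1 = new PointClass { X = 1 };
        var c2 = c1;             // copies only the reference (at most 8 bytes)
        c2.X = 99;
        Console.WriteLine(c1.X); // 99 — both variables point to the same heap object
    }
}
```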
But I have tested this and seen that on certain architectures this limit can be exceeded. If the processor's cache line is large enough, copying 1, 4, 16 or 64 bytes performs practically the same, while copying 128 bytes will surely take at least twice the time of 64 bytes (probably more, to manage the extra complexity), at least on most current architectures. Most architectures today use 64-byte cache lines, so all internal physical transport of bits occurs in blocks of 512 bits. Transferring 1 byte or 64 bytes costs the same; the only difference is in accommodating the data.
Of course, on an architecture without this kind of optimization the size can make a bigger difference, although up to 16 bytes the difference will still not be very large. 16 bytes usually works well on all architectures and is sufficient for objects used by value. If you need more than that, you may be doing something that is no longer worth doing with a struct.
In addition, there may be compiler or JITter optimizations that favor certain sizes. Keep in mind that this is an implementation detail: 16 bytes may have been the recommendation in the past and no longer be the real threshold today. I have read that the trigger that changes the generated code is 24 or 32 bytes on 64-bit platforms, but I never found official information. I do not know the details, but it likely depends on more modern instructions (SSE) that allow the copy to be done in a single operation; without such instructions the copy occurs in steps, which is slower.
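There is no official API that reveals the JIT's threshold, but you can at least check how big your struct actually is. A sketch using `Marshal.SizeOf` (the `Small`/`Large` structs are assumptions; for simple blittable fields like these the marshaled size matches the managed layout):

```csharp
using System;
using System.Runtime.InteropServices;

struct Small { public long A, B; }       // 2 × 8 bytes = 16 bytes, within the guideline
struct Large { public long A, B, C, D; } // 4 × 8 bytes = 32 bytes, above it

class Program
{
    static void Main()
    {
        Console.WriteLine(Marshal.SizeOf(typeof(Small))); // 16
        Console.WriteLine(Marshal.SizeOf(typeof(Large))); // 32
    }
}
```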
If you need extreme optimization, or want to avoid pressure on the garbage collector, it is possible to abuse this type a little. This site uses this a lot in its architecture. You have to know what you are doing, because value and reference types have different semantics, especially if the object is mutable.
In cases of abuse, you can avoid copying with ref, where the object is referenced and not copied, as if it were a reference type; the copy is then at most 8 bytes. It is not always possible to use this artifice, but in C# 7 it improved a lot: besides parameters, it is now possible to return a ref and use ref local variables. There is also a proposal to allow ref in an object's fields, in lambdas, etc.
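A minimal sketch of a C# 7 ref return plus ref local (the `Find` method and the array contents are hypothetical):

```csharp
using System;

class Program
{
    // Returns a reference into the array instead of a copy of the element (C# 7).
    public static ref int Find(int[] numbers, int value)
    {
        for (int i = 0; i < numbers.Length; i++)
            if (numbers[i] == value)
                return ref numbers[i];
        throw new InvalidOperationException("not found");
    }

    static void Main()
    {
        int[] data = { 10, 20, 30 };
        ref int slot = ref Find(data, 20); // ref local: no element copy is made
        slot = 99;                         // writes through to the array itself
        Console.WriteLine(data[1]);        // 99
    }
}
```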
Anyway, it is good not to abuse this, and not to fall into premature optimization. You have to observe where and how the object will be used, as well as how many instances there will be.
My experience is that the biggest problem is objects of medium lifetime (objects that reach Gen1, and especially Gen2, and die soon after), although very short-lived objects are also a waste when allocated on the heap.
If you know what you are doing, a struct with hundreds or thousands of bytes may be a good idea, albeit rarely.
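For example, a deliberately large struct can still pay off if you pass it by ref, so only an 8-byte reference crosses the call (this `Matrix4x4` is a hypothetical sketch, not the one from System.Numerics):

```csharp
using System;

// 16 doubles = 128 bytes, well above the 16-byte guideline, but kept as a
// value type to avoid heap allocations and garbage-collector pressure.
struct Matrix4x4
{
    public double M11, M12, M13, M14,
                  M21, M22, M23, M24,
                  M31, M32, M33, M34,
                  M41, M42, M43, M44;
}

class Program
{
    // Passing by ref copies at most 8 bytes instead of the whole 128.
    public static double Trace(ref Matrix4x4 m) => m.M11 + m.M22 + m.M33 + m.M44;

    static void Main()
    {
        var m = new Matrix4x4 { M11 = 1, M22 = 2, M33 = 3, M44 = 4 };
        Console.WriteLine(Trace(ref m)); // 10
    }
}
```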
I'll see if I can put together a test demonstrating this as soon as I have more time today. You have to be careful not to fall into a benchmarking trap. I think I can get it done today :)