Are there any platforms where pointers to different types have different sizes?

Tags: C, Pointers, Sizeof

C Problem Overview


The C standard allows pointers to different types to have different sizes, e.g. sizeof(char*) != sizeof(int*) is permitted. It does, however, require that if a pointer is converted to a void* and then converted back to its original type, it must compare as equal to its original value. Therefore, it follows logically that sizeof(void*) >= sizeof(T*) for all types T, correct?

On most common platforms in use today (x86, PPC, ARM, and 64-bit variants, etc.), the size of all pointers equals the native register size (4 or 8 bytes), regardless of the pointed-to type. Are there any esoteric or embedded platforms where pointers to different types might have different sizes? I'm specifically asking about data pointers, although I'd also be interested to know if there are platforms where function pointers have unusual sizes.

I'm definitely not asking about C++ pointers-to-members and pointers-to-member-functions. Those take on unusual sizes on common platforms, and can even vary within one platform depending on the properties of the pointed-to class (non-polymorphic, single inheritance, multiple inheritance, virtual inheritance, or incomplete type).
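A quick way to check any particular platform is just to print the sizes; the following is an illustrative sketch, and on the mainstream targets listed above every line prints the same value:

#include <stdio.h>

int main(void)
{
    /* All of these print the same number on typical x86/x86-64/ARM targets. */
    printf("char*   : %zu\n", sizeof(char *));
    printf("int*    : %zu\n", sizeof(int *));
    printf("double* : %zu\n", sizeof(double *));
    printf("void*   : %zu\n", sizeof(void *));
    return 0;
}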

C Solutions


Solution 1 - C

Answer from the C FAQ:

>The Prime 50 series used segment 07777, offset 0 for the null pointer, at least for PL/I. Later models used segment 0, offset 0 for null pointers in C, necessitating new instructions such as TCNP (Test C Null Pointer), evidently as a sop to all the extant poorly-written C code which made incorrect assumptions. Older, word-addressed Prime machines were also notorious for requiring larger byte pointers (char *'s) than word pointers (int *'s).

>The Eclipse MV series from Data General has three architecturally supported pointer formats (word, byte, and bit pointers), two of which are used by C compilers: byte pointers for char * and void *, and word pointers for everything else. For historical reasons during the evolution of the 32-bit MV line from the 16-bit Nova line, word pointers and byte pointers had the offset, indirection, and ring protection bits in different places in the word. Passing a mismatched pointer format to a function resulted in protection faults. Eventually, the MV C compiler added many compatibility options to try to deal with code that had pointer type mismatch errors.

>Some Honeywell-Bull mainframes use the bit pattern 06000 for (internal) null pointers.

>The CDC Cyber 180 Series has 48-bit pointers consisting of a ring, segment, and offset. Most users (in ring 11) have null pointers of 0xB00000000000. It was common on old CDC ones-complement machines to use an all-one-bits word as a special flag for all kinds of data, including invalid addresses.

>The old HP 3000 series uses a different addressing scheme for byte addresses than for word addresses; like several of the machines above it therefore uses different representations for char * and void * pointers than for other pointers.

>The Symbolics Lisp Machine, a tagged architecture, does not even have conventional numeric pointers; it uses the pair (basically a nonexistent handle) as a C null pointer. > > Depending on the ``memory model'' in use, 8086-family processors (PC > compatibles) may use 16-bit data pointers and 32-bit function > pointers, or vice versa. > > Some 64-bit Cray machines represent int * in the lower 48 bits of a > word; char * additionally uses some of the upper 16 bits to indicate a > byte address within a word. > > Additional links: A message from Chris Torek with more details > about some of these machines.

Solution 2 - C

Not quite what you're asking, but back in the 16-bit DOS/Windows days, you did have the distinction between a (near) pointer and a far pointer, the latter being 32 bits.

I might have the syntax wrong...

#include <stdio.h>
#include <malloc.h>   /* malloc and _fmalloc on 16-bit Microsoft C */

int main(void) {
    int *pInt = malloc(sizeof(int));          /* near pointer: offset only        */
    int far *fpInt = _fmalloc(sizeof(int));   /* far pointer: segment plus offset */
    printf("pInt: %u, fpInt: %u\n", (unsigned)sizeof pInt, (unsigned)sizeof fpInt);
}

Output:

pInt: 2, fpInt: 4

Solution 3 - C

>Therefore, it follows logically that sizeof(void*) >= sizeof(T*) for all types T, correct?

That doesn't necessarily follow, since sizeof is about the storage representation, and not all bit-patterns have to be valid values. I think you could write a conformant implementation where sizeof(int*) == 8, sizeof(void*) == 4, but there are no more than 2^32 possible values for an int*. Not sure why you'd want to.

Solution 4 - C

Back in the golden years of DOS, 8088s, and segmented memory, it was common to specify a "memory model" in which, for example, all code would fit into 64k (one segment) but data could span multiple segments; this meant that a function pointer would be 2 bytes while a data pointer would be 4 bytes. Not sure if anybody is still programming for machines of that kind; maybe some still survive in embedded uses.

Solution 5 - C

One could easily imagine a Harvard architecture machine having different sizes for function pointers and all other pointers. Don't know of an example...

Solution 6 - C

Near and far pointers are still used on some embedded microcontrollers with paged flash or RAM, to allow you to point to data in the same page (near pointer), or another page (far pointer, which is larger because it includes page information).

For example, Freescale's HCS12 microcontroller uses a 16-bit Von Neumann architecture, which means that no address can be more than 16 bits. Because of the limitation this would put on the amount of code space available, there is an 8-bit page register.

So to point to data in the same code page, you just specify the 16-bit address; this is a near pointer.

To point to data in another code page, you have to include both the 8-bit page number and the 16-bit address within that page, resulting in a 24-bit far pointer.
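To make that layout concrete, here is a rough plain-C model of the two pointer flavours; the type names are invented for this sketch, and a real HCS12 toolchain would expose them through compiler-specific near/far keywords instead:

#include <stdint.h>
#include <stdio.h>

typedef uint16_t near_ptr_t;   /* 16-bit address within the current page          */

typedef struct {
    uint8_t  page;             /* 8-bit page number (the value loaded into PPAGE) */
    uint16_t offset;           /* 16-bit address within that page                 */
} far_ptr_t;                   /* logically 24 bits of address                    */

int main(void)
{
    /* A near pointer fits in 2 bytes; a far pointer needs at least 3
       (a host compiler will usually pad this struct to 4). */
    printf("near: %zu bytes, far: >= %zu bytes\n",
           sizeof(near_ptr_t), sizeof(uint8_t) + sizeof(uint16_t));
    return 0;
}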

Solution 7 - C

It is possible for the size of pointers to data to differ from the size of pointers to functions, for example. This is common on microcontrollers used in embedded systems. Harvard architecture machines, like dmckee mentioned, make this easy to happen.

It turns out that it makes gcc backends a pain to develop! :)

Edit: I can't go into the details of the specific machine I am talking about, but let me add why Harvard machines make this easy to happen. The Harvard architecture has separate storage and pathways for instructions and data, so if the address bus for instructions is 'larger' than the one for data, you're bound to have function pointers whose size is bigger than pointers to data!
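A quick sanity check for any given toolchain is simply to compare the two sizes; func_ptr below is just a local typedef for this sketch, and only the sizes are compared because the C standard does not even guarantee that a function pointer can be converted to void * and back:

#include <stdio.h>

typedef void (*func_ptr)(void);   /* an arbitrary function-pointer type */

int main(void)
{
    printf("sizeof(void *)   = %zu\n", sizeof(void *));
    printf("sizeof(func_ptr) = %zu\n", sizeof(func_ptr));

    if (sizeof(void *) != sizeof(func_ptr))
        puts("data and function pointers differ in size on this target");
    return 0;
}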

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

| Content Type | Original Author | Original Content on Stackoverflow |
| --- | --- | --- |
| Question | Adam Rosenfield | View Question on Stackoverflow |
| Solution 1 - C | Robert S. Barnes | View Answer on Stackoverflow |
| Solution 2 - C | Aric TenEyck | View Answer on Stackoverflow |
| Solution 3 - C | Steve Jessop | View Answer on Stackoverflow |
| Solution 4 - C | Alex Martelli | View Answer on Stackoverflow |
| Solution 5 - C | dmckee --- ex-moderator kitten | View Answer on Stackoverflow |
| Solution 6 - C | Steve Melnikoff | View Answer on Stackoverflow |
| Solution 7 - C | Paulo Matos | View Answer on Stackoverflow |