What is the idea behind ^= 32, which converts lowercase letters to uppercase and vice versa?

C++ · Bit Manipulation · ASCII

C++ Problem Overview


I was solving a problem on Codeforces. Normally I first check whether the character is an uppercase or lowercase English letter, then subtract or add 32 to convert it to the corresponding letter. But I found someone doing ^= 32 to achieve the same thing. Here it is:

char foo = 'a';
foo ^= 32;
char bar = 'A';
bar ^= 32;
cout << foo << ' ' << bar << '\n'; // foo is A, and bar is a

I have searched for an explanation and couldn't find one. So why does this work?

C++ Solutions


Solution 1 - C++

Let's take a look at ASCII code table in binary.

A 1000001    a 1100001
B 1000010    b 1100010
C 1000011    c 1100011
...
Z 1011010    z 1111010

And 32 is 0100000, which is the only bit that differs between the lowercase and uppercase versions of the same letter. So toggling that bit toggles the case of a letter.

Solution 2 - C++

This uses the fact that ASCII values were chosen by really smart people.

foo ^= 32;

This [flips the 6th lowest bit](https://stackoverflow.com/q/47981/5470596 "How do you set, clear, and toggle a single bit?")1 of foo (the uppercase flag of ASCII sort of), transforming an ASCII upper case to a lower case and vice-versa.

+---+------------+------------+
|   | Upper case | Lower case |  32 is 00100000
+---+------------+------------+
| A | 01000001   | 01100001   |
| B | 01000010   | 01100010   |
|            ...              |
| Z | 01011010   | 01111010   |
+---+------------+------------+
Example
'A' ^ 32

    01000001 'A'
XOR 00100000 32
------------
    01100001 'a'

And by property of XOR, 'a' ^ 32 == 'A'.

Notice

C++ is not required to use ASCII to represent characters; EBCDIC is another possibility. This trick only works on ASCII platforms. A more portable solution is to use std::tolower and std::toupper, with the added bonus of being locale-aware (it does not automagically solve all your problems though, see comments):

#include <locale>

bool case_insensitive_equal(char lhs, char rhs)
{
    return std::tolower(lhs, std::locale{}) == std::tolower(rhs, std::locale{}); // the std::locale argument is optional; it enables locale-awareness
}

assert(case_insensitive_equal('A', 'a'));

1) As 32 is 1 << 5 (2 to the power 5), it flips the 6th bit (counting from 1).

Solution 3 - C++

Allow me to say that this is -- although it seems smart -- a really, really stupid hack. If someone recommends this to you in 2019, hit him. Hit him as hard as you can.
You can, of course, do it in your own software that you and nobody else uses if you know that you will never use any language but English anyway. Otherwise, no go.

The hack was arguably "OK" some 30-35 years ago, when computers didn't really do much but English in ASCII, and maybe one or two major European languages. But... no longer so.

The hack works because US-Latin upper- and lowercase letters are exactly 0x20 apart and appear in the same order, so they differ in exactly one bit, and that is the bit this hack toggles.

Now, the people creating code pages for Western Europe, and later the Unicode consortium, were smart enough to keep this scheme for e.g. German umlauts and French accented vowels. Not so for ß, which (until someone convinced the Unicode consortium in 2017, and a large fake-news print magazine wrote about it, actually convincing the Duden -- no comment on that) didn't even exist as a capital (it transforms to SS). Now it does exist as a capital, but the two are 0x1DBF positions apart, not 0x20.

The implementors were, however, not considerate enough to keep this going. For example, if you apply your hack in some East European languages or the like (I wouldn't know about Cyrillic), you will get a nasty surprise. All those "hatchet" characters are examples of that, lowercase and uppercase are one apart. The hack thus does not work properly there.

There's much more to consider, for example, some characters do not simply transform from lower- to uppercase at all (they're replaced with different sequences), or they may change form (requiring different code points).

Do not even think about what this hack will do to stuff like Thai or Chinese (it'll just give you complete nonsense).

Saving a couple of hundred CPU cycles may have been very worthwhile 30 years ago, but nowadays, there is really no excuse for converting a string properly. There are library functions for performing this non-trivial task.
The time taken to convert several dozens kilobytes of text properly is negligible nowadays.

Solution 4 - C++

It works because, as it happens, the difference between 'a' and 'A' in ASCII and derived encodings is 32, and 32 is also the value of the sixth bit. Flipping the sixth bit with an exclusive OR thus converts between upper and lower case.

Solution 5 - C++

Most likely your implementation's character set will be ASCII. Looking at an ASCII table, we see that there's a difference of exactly 32 between the value of a lowercase letter and its uppercase counterpart. Therefore, if we do ^= 32 (which equates to toggling the 6th least significant bit), it switches between the lowercase and uppercase character.

Note that it works with all the symbols, not just the letters. It toggles a character with the character whose 6th bit is flipped, so the two form a pair that toggles back and forth. For the letters, the respective upper/lowercase characters form such a pair. A NUL changes into a space and vice versa, and @ toggles with the backtick. Basically, any character in the first column of an ASCII chart toggles with the character one column over, and the same applies to the third and fourth columns.

I wouldn't use this hack though, as there's no guarantee that it will work on any given system. Just use toupper and tolower instead, and queries such as isupper.

Solution 6 - C++

Plenty of good answers here describe how this works, but the reason it was designed this way is performance. Bitwise operations are faster than most other operations within a processor. You can do a quick case-insensitive comparison by simply not looking at the bit that determines case, or change case to upper/lower simply by flipping that bit (the people who designed the ASCII table were pretty smart).

Obviously, this isn't nearly as big a deal today as it was back in 1960 (when work on ASCII first began), due to faster processors and Unicode, but there are still some low-cost processors on which this could make a significant difference, as long as you can guarantee only ASCII characters.

https://en.wikipedia.org/wiki/Bitwise_operation

> On simple low-cost processors, typically, bitwise operations are substantially faster than division, several times faster than multiplication, and sometimes significantly faster than addition.

NOTE: I would recommend using standard libraries for working with strings for a number of reasons (readability, correctness, portability, etc). Only use bit flipping if you have measured performance and this is your bottleneck.

Solution 7 - C++

It's how ASCII works, that's all.

But in exploiting this, you are giving up portability as C++ doesn't insist on ASCII as the encoding.

This is why the functions std::toupper and std::tolower are provided by the C++ standard library - you should use those instead.

Solution 8 - C++

See the second table at http://www.catb.org/esr/faqs/things-every-hacker-once-knew/#_ascii, and following notes, reproduced below:

> The Control modifier on your keyboard basically clears the top three bits of whatever character you type, leaving the bottom five and mapping it to the 0..31 range. So, for example, Ctrl-SPACE, Ctrl-@, and Ctrl-` all mean the same thing: NUL.
>
> Very old keyboards used to do Shift just by toggling the 32 or 16 bit, depending on the key; this is why the relationship between small and capital letters in ASCII is so regular, and the relationship between numbers and symbols, and some pairs of symbols, is sort of regular if you squint at it. The ASR-33, which was an all-uppercase terminal, even let you generate some punctuation characters it didn't have keys for by shifting the 16 bit; thus, for example, Shift-K (0x4B) became a [ (0x5B)

ASCII was designed such that the Shift and Ctrl keyboard keys could be implemented without much (or, for Ctrl, perhaps any) logic - Shift probably required only a few gates. It probably made at least as much sense to store the wire protocol as any other character encoding (no software conversion required).

The linked article also explains many strange hacker conventions, such as the classic old^H^H^H^H^H joke: Ctrl-H is a single-character backspace, so typing ^H in text jokingly "retracts" what came before.

Solution 9 - C++

XORing with 32 (00100000 in binary) toggles the sixth bit (from the right): it sets the bit if it was clear and clears it if it was set. For ASCII letters this is strictly equivalent to adding or subtracting 32.

Solution 10 - C++

The lower-case and upper-case alphabetic ranges don't cross a %32 "alignment" boundary in the ASCII coding system.

This is why bit 0x20 is the only difference between the upper/lower case versions of the same letter.

If this wasn't the case, you'd need to add or subtract 0x20, not just toggle, and for some letters there would be carry-out to flip other higher bits. (And there wouldn't be a single operation that could toggle, and checking for alphabetic characters in the first place would be harder because you couldn't |= 0x20 to force lcase.)


Related ASCII-only tricks: you can check for an alphabetic ASCII character by forcing lowercase with c |= 0x20 and then checking if (unsigned) c - 'a' <= ('z'-'a'). So just 3 operations: OR + SUB + CMP against a constant 25. Of course, compilers know how to optimize (c>='a' && c<='z') into asm like this for you, so at most you should do the c|=0x20 part yourself. It's rather inconvenient to do all the necessary casting yourself, especially to work around default integer promotions to signed int.

unsigned char lcase = y | 0x20;             // y is the character being tested
if (lcase - 'a' <= (unsigned)('z'-'a')) {   // lcase-'a' wraps for characters below 'a'
    // y is alphabetic ASCII
}
// else it's not

Or to put it another way:

unsigned char lcase = y | 0x20;
unsigned char alphabet_idx = lcase - 'a';   // 0-indexed position in the alphabet
bool alpha = alphabet_idx <= (unsigned)('z'-'a');

See also https://stackoverflow.com/questions/735204/convert-a-string-in-c-to-upper-case/37151084#37151084 (SIMD string toupper for ASCII only, masking the operand for XOR using that check.)

And also https://stackoverflow.com/questions/35932273/how-to-access-a-char-array-and-change-lower-case-letters-to-upper-case-and-vice/35936844#35936844 (C with SIMD intrinsics, and scalar x86 asm case-flip for alphabetic ASCII characters, leaving others unmodified.)


These tricks are mostly only useful if hand-optimizing some text-processing with SIMD (e.g. SSE2 or NEON), after checking that none of the chars in a vector have their high bit set. (And thus none of the bytes are part of a multi-byte UTF-8 encoding for a single character, which might have different upper/lower-case inverses). If you find any, you can fall back to scalar for this chunk of 16 bytes, or for the rest of the string.

There are even some locales where toupper() or tolower() on some characters in the ASCII range produce characters outside that range, notably Turkish where I ↔ ı and İ ↔ i. In those locales, you'd need a more sophisticated check, or probably not trying to use this optimization at all.


But in some cases, you're allowed to assume ASCII instead of UTF-8, e.g. Unix utilities with LANG=C (the POSIX locale), not en_CA.UTF-8 or whatever.

But if you can verify it's safe, you can toupper medium-length strings much faster than calling toupper() in a loop (like 5x), and last I tested with Boost 1.58, much much faster than boost::to_upper_copy<char*, std::string>() which does a stupid dynamic_cast for every character.

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

| Content Type | Original Author | Original Content on Stackoverflow |
|---|---|---|
| Question | Devon | View Question on Stackoverflow |
| Solution 1 - C++ | Hanjoung Lee | View Answer on Stackoverflow |
| Solution 2 - C++ | YSC | View Answer on Stackoverflow |
| Solution 3 - C++ | Damon | View Answer on Stackoverflow |
| Solution 4 - C++ | Jack Aidley | View Answer on Stackoverflow |
| Solution 5 - C++ | Blaze | View Answer on Stackoverflow |
| Solution 6 - C++ | Brian | View Answer on Stackoverflow |
| Solution 7 - C++ | Bathsheba | View Answer on Stackoverflow |
| Solution 8 - C++ | Iiridayn | View Answer on Stackoverflow |
| Solution 9 - C++ | Yves Daoust | View Answer on Stackoverflow |
| Solution 10 - C++ | Peter Cordes | View Answer on Stackoverflow |