Is TCHAR still relevant?

Tags: c++, c, windows, unicode, wchar-t

C++ Problem Overview


I'm new to Windows programming and after reading the Petzold book I wonder:

Is it still good practice to use the TCHAR type and the _T() macro to declare strings, or should I just use wchar_t and L"" strings in new code?

I will target only Windows 2000 and up, and my code will be internationalized (i18n) from the start.

C++ Solutions


Solution 1 - C++

The short answer: NO.

Like all the others already wrote, a lot of programmers still use TCHARs and the corresponding functions. In my humble opinion, the whole concept was a bad idea. UTF-16 string processing is very different from simple ASCII/MBCS string processing. If you use the same algorithms/functions with both of them (this is what the TCHAR idea is based on!), you get very bad performance on the UTF-16 version if you do anything more than simple string concatenation (like parsing, etc.). The main reason is surrogate pairs.

With the sole exception of cases where you really have to compile your application for a system that doesn't support Unicode, I see no reason to use this baggage from the past in a new application.

Solution 2 - C++

I have to agree with Sascha. The underlying premise of TCHAR / _T() / etc. is that you can write an "ANSI"-based application and then magically give it Unicode support by defining a macro. But this is based on several bad assumptions:

That you actively build both MBCS and Unicode versions of your software

Otherwise, you will slip up and use ordinary char* strings in many places.

That you don't use non-ASCII backslash escapes in _T("...") literals

Unless your "ANSI" encoding happens to be ISO-8859-1, the resulting char* and wchar_t* literals won't represent the same characters.

That UTF-16 strings are used just like "ANSI" strings

They're not. Unicode introduces several concepts that don't exist in most legacy character encodings. Surrogates. Combining characters. Normalization. Conditional and language-sensitive casing rules.

And perhaps most importantly, the fact that UTF-16 is rarely saved on disk or sent over the Internet: UTF-8 tends to be preferred for external representation.

That your application doesn't use the Internet

(Now, this may be a valid assumption for your software, but...)

The web runs on UTF-8 and a plethora of rarer encodings. The TCHAR concept only recognizes two: "ANSI" (which can't be UTF-8) and "Unicode" (UTF-16). It may be useful for making your Windows API calls Unicode-aware, but it's damned useless for making your web and e-mail apps Unicode-aware.

That you use no non-Microsoft libraries

Nobody else uses TCHAR. Poco uses std::string and UTF-8. SQLite has UTF-8 and UTF-16 versions of its API, but no TCHAR. TCHAR isn't even in the standard library, so no std::tcout unless you want to define it yourself.

What I recommend instead of TCHAR

Forget that "ANSI" encodings exist, except for when you need to read a file that isn't valid UTF-8. Forget about TCHAR too. Always call the "W" version of Windows API functions. Define UNICODE and _UNICODE just to make sure you don't accidentally call an "A" function.

Always use UTF encodings for strings: UTF-8 for char strings and UTF-16 (on Windows) or UTF-32 (on Unix-like systems) for wchar_t strings. typedef UTF16 and UTF32 character types to avoid platform differences.

Solution 3 - C++

If you're wondering whether it's still in practice, then yes - it is still used quite a bit. No one will look at your code funny if it uses TCHAR and _T(""). The project I'm working on now is converting from ANSI to Unicode - and we're going the portable (TCHAR) route.

However...

My vote would be to forget all the ANSI/UNICODE portable macros (TCHAR, _T(""), all the _tXXXXXX calls, etc...) and just assume Unicode everywhere. I really don't see the point of being portable if you'll never need an ANSI version. I would use all the wide character functions and types directly. Prepend all string literals with an L.

Solution 4 - C++

I would still use the TCHAR syntax if I were doing a new project today. There's not much practical difference between using it and the WCHAR syntax, and I prefer code which is explicit about what the character type is. Since most API functions and helper objects take/use TCHAR types (e.g., CString), it just makes sense to use it. Plus it gives you flexibility if you decide to use the code in an ASCII app at some point, or if Windows ever evolves to Unicode32, etc.

If you decide to go the WCHAR route, I would be explicit about it. That is, use CStringW instead of CString, and use casting macros when converting to TCHAR (e.g., CW2CT).

That's my opinion, anyway.

Solution 5 - C++

The Introduction to Windows Programming article on MSDN says

> New applications should always call the Unicode versions (of the API).
>
> The TEXT and TCHAR macros are less useful today, because all applications should use Unicode.

I would stick to wchar_t and L"".

Solution 6 - C++

I would like to suggest a different approach (neither of the two).

To summarize, use char* and std::string, assuming UTF-8 encoding, and do the conversions to UTF-16 only when wrapping API functions.

More information and justification for this approach in Windows programs can be found in http://www.utf8everywhere.org.

Solution 7 - C++

TCHAR/WCHAR might be enough for some legacy projects. But for new applications, I would say NO.

All this TCHAR/WCHAR stuff is there for historical reasons. TCHAR provides a seemingly neat way (a disguise) to switch between ANSI text encoding (MBCS) and Unicode text encoding (UTF-16). In the past, people did not have an understanding of the number of characters in all the languages of the world. They assumed 2 bytes were enough to represent all characters, and thus designed a fixed-length character encoding scheme using WCHAR. However, this has no longer been true since the release of Unicode 2.0 in 1996.

That is to say: no matter which of CHAR/WCHAR/TCHAR you use, the text-processing part of your program should be able to handle variable-length characters for internationalization.

So you actually need to do more than choose one of CHAR/WCHAR/TCHAR when programming on Windows:

  1. If your application is small and does not involve text processing (i.e. it just passes text strings around as arguments), then stick with WCHAR, since it is easier to work with the Unicode WinAPI that way.
  2. Otherwise, I would suggest using UTF-8 as the internal encoding, storing text in char strings or std::string, and converting to UTF-16 when calling the WinAPI. UTF-8 is now the dominant encoding, and there are lots of handy libraries and tools for processing UTF-8 strings.

Check out this wonderful website for more in-depth reading: http://utf8everywhere.org/

Solution 8 - C++

Yes, absolutely; at least for the _T macro. I'm not so sure about the wide-character stuff, though.

The reason is to better support WinCE or other non-standard Windows platforms. If you're 100% certain that your code will remain on NT, then you can probably just use regular C-string declarations. However, it's best to tend towards the more flexible approach, as it's much easier to #define that macro away on a non-Windows platform than to go through thousands of lines of code and add it everywhere in case you need to port some library to Windows Mobile.

Solution 9 - C++

IMHO, if there are TCHARs in your code, you're working at the wrong level of abstraction.

Use whatever string type is most convenient for you when dealing with text processing - this will hopefully be something supporting unicode, but that's up to you. Do conversion at OS API boundaries as necessary.

When dealing with file paths, whip up your own custom type instead of using strings. This will allow you OS-independent path separators, will give you an easier interface to code against than manual string concatenation and splitting, and will be a lot easier to adapt to different OSes (ansi, ucs-2, utf-8, whatever).

Solution 10 - C++

The only reasons I see to use anything other than the explicit WCHAR are portability and efficiency.

If you want to make your final executable as small as possible use char.

If you don't care about RAM usage and want internationalization to be as easy as simple translation, use WCHAR.

If you want to make your code flexible, use TCHAR.

If you only plan on using the Latin characters, you might as well use the ASCII/MBCS strings so that your user does not need as much RAM.

For people who are "i18n from the start up", save yourself the source code space and simply use all of the Unicode functions.

Solution 11 - C++

Just adding to an old question:

NO

Go start a new CLR C++ project in VS2010. Microsoft themselves use L"Hello World", 'nuff said.

Solution 12 - C++

TCHAR has taken on a new meaning: porting from WCHAR back to CHAR.

https://docs.microsoft.com/en-us/windows/uwp/design/globalizing/use-utf8-code-page

> Recent releases of Windows 10 have used the ANSI code page and -A APIs as a means to introduce UTF-8 support to apps. If the ANSI code page is configured for UTF-8, -A APIs operate in UTF-8.
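Per that page, an application opts in through its manifest. A sketch of the relevant fragment, using the activeCodePage element documented there (verify the exact schema against the linked page before shipping):

```xml
<?xml version="1.0" encoding="utf-8"?>
<assembly manifestVersion="1.0" xmlns="urn:schemas-microsoft-com:asm.v1">
  <application>
    <windowsSettings>
      <!-- Forces the process's ANSI code page to UTF-8, so the -A APIs
           accept and return UTF-8 (Windows 10 1903 and later). -->
      <activeCodePage xmlns="http://schemas.microsoft.com/SMI/2019/WindowsSettings">UTF-8</activeCodePage>
    </windowsSettings>
  </application>
</assembly>
```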

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

| Content Type | Original Author | Original Content on Stackoverflow |
|---|---|---|
| Question | Fábio | View Question on Stackoverflow |
| Solution 1 - C++ | Sascha | View Answer on Stackoverflow |
| Solution 2 - C++ | dan04 | View Answer on Stackoverflow |
| Solution 3 - C++ | Aardvark | View Answer on Stackoverflow |
| Solution 4 - C++ | Nick | View Answer on Stackoverflow |
| Solution 5 - C++ | Steven | View Answer on Stackoverflow |
| Solution 6 - C++ | Pavel Radzivilovsky | View Answer on Stackoverflow |
| Solution 7 - C++ | LeOpArD | View Answer on Stackoverflow |
| Solution 8 - C++ | Nik Reiman | View Answer on Stackoverflow |
| Solution 9 - C++ | snemarch | View Answer on Stackoverflow |
| Solution 10 - C++ | Trololol | View Answer on Stackoverflow |
| Solution 11 - C++ | kizzx2 | View Answer on Stackoverflow |
| Solution 12 - C++ | OwnageIsMagic | View Answer on Stackoverflow |