How to reverse String.fromCharCode?
Javascript Problem Overview
String.fromCharCode(72) gives 'H'. How do I get the number 72 back from the character 'H'?
Javascript Solutions
Solution 1 - Javascript
'H'.charCodeAt(0)
Solution 2 - Javascript
Use charCodeAt:
var str = 'H';
var charcode = str.charCodeAt(0);
Solution 3 - Javascript
@Silvio's answer is only true for code points up to 0xFFFF (which is also the maximum that a single String.fromCharCode argument can represent). You can't always assume the length of a character is one:
'𐌰'.length
-> 2
Here's something that works:
var utf16ToDig = function(s) {
  var length = s.length;
  var index = -1;
  var result = "";
  var hex;
  while (++index < length) {
    hex = s.charCodeAt(index).toString(16).toUpperCase();
    result += ('0000' + hex).slice(-4);
  }
  return parseInt(result, 16);
};
Using it:
utf16ToDig('𐌰').toString(16)
-> "d800df30"
(Inspiration from https://mothereff.in/utf-8)
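On engines with ES2015 support, String.prototype.codePointAt and String.fromCodePoint handle astral characters directly, which sidesteps the surrogate-pair bookkeeping above. A minimal sketch:

```javascript
// codePointAt combines a surrogate pair into one code point.
var cp = '𐌰'.codePointAt(0); // 0x10330
// fromCodePoint is the inverse and accepts code points above 0xFFFF.
var ch = String.fromCodePoint(0x10330); // '𐌰'
```

Note that this yields the actual Unicode code point (0x10330), whereas utf16ToDig above concatenates the two UTF-16 code units (0xD800, 0xDF30).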
Solution 4 - Javascript
You can define your own global functions like this:
function CHR(ord)
{
  return String.fromCharCode(ord);
}

function ORD(chr)
{
  return chr.charCodeAt(0);
}
Then use them like this:
var mySTR = CHR(72);
or
var myNUM = ORD('H');
(If you want to use them more than once, and/or a lot in your code.)
Solution 5 - Javascript
String.fromCharCode accepts multiple arguments, so this is valid:
const binaryArray = [10, 24] // ...
const str = String.fromCharCode(...binaryArray)
In case you're looking for the opposite of that (like I was), this might come in handy:
const binaryArray = str
  .split('')
  .reduce((acc, next) => [...acc, next.charCodeAt(0)], [])
Solution 6 - Javascript
If you have to consider encoding, note that the charCodeAt method of String always works on UTF-16 code units. In Node.js you can use the string_decoder module and the encoding parameter of the Buffer object to apply a specific encoding.
The charCodeAt method returns UTF-16 code units even when the String object was built from UTF-8 encoded bytes:
str = new String(Buffer.from([195, 169]))
// -> [String: 'é']
str.charCodeAt(0)
// -> 233
Building the string with fromCharCode would not be right either, as it expects UTF-16 code units:
myStr=String.fromCharCode(50089, 65, 233)
// -> '쎩Aé'
Buffer.from(myStr, 'utf-8')
// -> <Buffer ec 8e a9 41 c3 a9>
Buffer.from(myStr, 'ascii')
// -> <Buffer a9 41 e9>
myStr.charCodeAt(0)
// -> 50089
myStr.charCodeAt(2)
// -> 233
Here is a simple example to encode and decode UTF8 bytes and ASCII bytes :
var string_decoder = require('string_decoder');
var dec = new string_decoder.StringDecoder('utf-8');
dec.write(Buffer.from([65,67,195,169,98]));
// -> 'ACéb'
var theBytes=new Buffer('aéé','utf-8');
// -> <Buffer 61 c3 a9 c3 a9>
var dec=new string_decoder.StringDecoder('ascii')
dec.write(Buffer.from([65,67,195,169,98]))
// -> 'ACC)b'
var theBytes=new Buffer('aéé','ascii')
// -> <Buffer 61 e9 e9>
The StringDecoder.write method returns a String object built from the provided byte buffer. The encoding parameter of the Buffer object gives you a way to get a Buffer filled with the bytes of a provided String in a specific encoding.
So, to get the ASCII encoding of the character 'H':
Buffer.from('H', 'ascii');
// -> <Buffer 48>
Buffer.from('H', 'ascii')[0];
// -> 72
That way you can also handle multi-byte encoded characters:
arr = Buffer.from('é', 'ascii');
// -> <Buffer e9>
code = arr.readUIntBE(0, arr.length);
// -> 233
code.toString(16);
// -> 'e9'
arr = Buffer.from('é', 'utf-8');
// -> <Buffer c3 a9>
code = arr.readUIntBE(0, arr.length);
// -> 50089
code.toString(16);
// -> 'c3a9'