How can I get the Unicode code point(s) of a Character?

Unicode Problem Overview


How can I extract the Unicode code point(s) of a given Character without first converting it to a String? I know that I can use the following:

let ch: Character = "A"
let s = String(ch).unicodeScalars
s[s.startIndex].value // returns 65

but it seems like there should be a more direct way to accomplish this using just Swift's standard library. The Language Guide sections "Working with Characters" and "Unicode" only discuss iterating through the characters in a String, not working directly with Characters.

Unicode Solutions


Solution 1 - Unicode

From what I can gather from the documentation, they want you to get Character values from a String because the String provides context: is this Character encoded as UTF-8, UTF-16, or as 21-bit code points (scalars)?

If you look at how Character is defined in the Swift standard library, it is actually an enum. This is probably due to the various representations available from String.utf8, String.utf16, and String.unicodeScalars.

It seems they do not expect you to work with Character values directly; instead you work with Strings, and you as the programmer decide how to extract values from the String itself, so the choice of encoding is preserved.

That said, if you need to get the code point concisely, I would recommend an extension like this:

extension Character {
    /// Returns the value of this character's first Unicode scalar.
    /// Note: a Character may consist of several scalars; this drops all but the first.
    func unicodeScalarCodePoint() -> UInt32 {
        let scalars = String(self).unicodeScalars
        return scalars[scalars.startIndex].value
    }
}

Then you can use it like so:

let char: Character = "A"
char.unicodeScalarCodePoint() // 65
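
Note that a Character can be composed of more than one scalar, in which case this extension silently drops everything after the first. A minimal sketch of the caveat (the decomposed "é" here is just an illustrative example):

let e: Character = "\u{65}\u{301}" // "é" built from "e" + combining acute accent
e.unicodeScalarCodePoint()         // 101 — only the first scalar ("e") is reported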

In summary, string and character encoding is tricky once you factor in all the possibilities, and this scheme allows each possibility to be represented.

Also remember that this is a 1.0 release; I'm sure they will expand Swift's syntactic sugar soon.

Solution 2 - Unicode

I think there are some misunderstandings about Unicode here. Unicode itself is NOT an encoding; it does not transform grapheme clusters (or "characters" in the human-readable sense) into any sort of binary sequence. Unicode is just a big table that collects the grapheme clusters used by all languages on Earth (and, unofficially, Klingon as well). Those grapheme clusters are organized and indexed by code points (a 21-bit number in Swift, written in a form like U+1F431). You can find where the character you are looking for sits in the big Unicode table by using its code point.

Meanwhile, UTF-8, UTF-16, and UTF-32 are actual encodings. Yes, there is more than one way to encode Unicode characters into binary sequences. Which encoding to use depends on the project you are working on, but most web pages are encoded in UTF-8 (you can actually check this right now).

Concept 1: A Unicode code point is called a Unicode scalar in Swift

> A Unicode scalar is any Unicode code point in the range U+0000 to U+D7FF inclusive or U+E000 to U+10FFFF inclusive. Unicode scalars do not include the Unicode surrogate pair code points, which are the code points in the range U+D800 to U+DFFF inclusive.
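
Swift's Unicode.Scalar initializer reflects this definition: the UInt32 variant is failable and returns nil for surrogate code points. A small sketch (Swift 5):

print(Unicode.Scalar(UInt32(0x41)) as Any)   // Optional("A") — a valid scalar
print(Unicode.Scalar(UInt32(0xD800)) as Any) // nil — surrogates are not scalars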

Concept 2: The code unit is the abstract representation of an encoding — the unit of storage (8, 16, or 32 bits) into which an encoded string is divided.

Consider the following code snippet

let theCat = "Cat!🐱"

for char in theCat.utf8 {
    print("\(char) ", terminator: "") // code units of the UTF-8 encoding, printed in decimal
}
print("")
for char in theCat.utf8 {
    print("\(String(char, radix: 2)) ", terminator: "") // the same UTF-8 code units, printed in binary
}
print("")


for char in theCat.utf16 {
    print("\(char) ", terminator: "") // code units of the UTF-16 encoding, printed in decimal
}
print("")
for char in theCat.utf16 {
    print("\(String(char, radix: 2)) ", terminator: "") // the same UTF-16 code units, printed in binary
}
print("")

for char in theCat.unicodeScalars {
    print("\(char.value) ", terminator: "") // scalar values, equal to UTF-32 code units, printed in decimal
}
print("")
for char in theCat.unicodeScalars {
    print("\(String(char.value, radix: 2)) ", terminator: "") // the same scalar values, printed in binary
}

By "abstract representation" I mean: a code unit written as a base-10 (decimal) number is the same value as its base-2 (binary) encoding. The binary encoding is made for machines; the decimal code unit is simply easier for humans to read.

Concept 3: A character may be produced by different Unicode code point(s), depending on which scalars the grapheme cluster is composed from (this is why I said "characters" in the human-readable sense at the beginning).

Consider the following code snippet:

let precomposed: String = "\u{D55C}"
let decomposed: String = "\u{1112}\u{1161}\u{11AB}"
print(precomposed.count) // prints "1"
print(decomposed.count)  // prints "1" — three scalars, but a single grapheme cluster
print(precomposed)       // prints "한"
print(decomposed)        // prints "한"

The precomposed and decomposed strings are visually and linguistically equal, but they consist of different code points, and therefore different code units when encoded with the same encoding (see the following example):

for preCha in precomposed.utf16 {
    print("\(preCha) ", terminator: "") // prints 54620
}

print("")

for deCha in decomposed.utf16 {
    print("\(deCha) ", terminator: "") // prints 4370 4449 4523
}
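
Incidentally, Swift compares Strings by canonical equivalence, so the two strings above are still equal even though their code points and code units differ. A quick sketch:

print(precomposed == decomposed) // true — equal under canonical equivalence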

Extra example

var word = "cafe"
print("the number of characters in \(word) is \(word.count)") // 4

word += "\u{301}" // COMBINING ACUTE ACCENT, U+0301

print("the number of characters in \(word) is \(word.count)") // still 4 — the accent combines with the "e"

Summary: a code point, a.k.a. the index of a character's position in the Unicode table, is a separate concept from the UTF-8, UTF-16, and UTF-32 encoding schemes.

Further Readings:

http://www.joelonsoftware.com/articles/Unicode.html

http://kunststube.net/encoding/

https://www.mikeash.com/pyblog/friday-qa-2015-11-06-why-is-swifts-string-api-so-hard.html

Solution 3 - Unicode

I think the issue is that Character doesn't represent a Unicode code point. It represents a "Unicode grapheme cluster", which can consist of multiple code points.

Instead, UnicodeScalar represents a Unicode code point.
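
To see the distinction concretely, here is a minimal sketch (the flag emoji is just an illustrative example of a single Character spanning multiple code points; Character.unicodeScalars requires Swift 4.2 or later):

let flag: Character = "🇺🇸" // one grapheme cluster built from two regional-indicator code points
for scalar in flag.unicodeScalars {
    print(String(scalar.value, radix: 16)) // 1f1fa, then 1f1f8
}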

Solution 4 - Unicode

I agree with you; there should be a way to get the code directly from a Character. But all I can offer is a shorthand:

let ch: Character = "A"
for code in String(ch).utf8 { print(code) } // prints 65
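
If you want code points rather than UTF-8 bytes, the same shorthand works with unicodeScalars; a quick sketch (the euro sign is just an example of a non-ASCII character):

let euro: Character = "€"
for code in String(euro).unicodeScalars { print(code.value) } // 8364, i.e. U+20AC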

Solution 5 - Unicode

#1. Using Unicode.Scalar's value property

With Swift 5, Unicode.Scalar has a value property that has the following declaration:

> A numeric representation of the Unicode scalar.

var value: UInt32 { get }

The following Playground sample code shows how to iterate over the unicodeScalars property of a Character and print the value of each Unicode scalar that composes it:

let character: Character = "A"
for scalar in character.unicodeScalars {
    print(scalar.value)
}

/*
 prints: 65
 */

As an alternative, you can use the sample code below if you only want to print the value of the first Unicode scalar of a Character:

let character: Character = "A"
let scalars = character.unicodeScalars
let firstScalar = scalars[scalars.startIndex]
print(firstScalar.value)

/*
 prints: 65
 */

#2. Using Character's asciiValue property

If what you really want is to get the ASCII encoding value of a character, you can use Character's asciiValue. asciiValue has the following declaration:

> Returns the ASCII encoding value of this Character, if ASCII.

var asciiValue: UInt8? { get }

The Playground sample code below shows how to use asciiValue:

let character: Character = "A"
print(String(describing: character.asciiValue))

/*
 prints: Optional(65)
 */

let character: Character = "П"
print(String(describing: character.asciiValue))

/*
 prints: nil
 */
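
Relatedly, Swift 5's Character also offers an isASCII property, which can serve as a guard before unwrapping; a small sketch:

let letter: Character = "A"
if letter.isASCII, let value = letter.asciiValue {
    print(value) // prints 65
}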

Solution 6 - Unicode

Have you tried:

import Foundation

let characterString: String = "abc"
var numbers: [Int] = []
for codeUnit in characterString.utf8 {
    numbers.append(Int(codeUnit)) // each UTF-8 code unit is already a UInt8; no string round-trip needed
}

numbers

Output:

>[97, 98, 99]

The String may also contain just a single Character.
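
Bear in mind that this iterates UTF-8 code units, not code points, so any character outside ASCII yields several numbers. A quick sketch (the accented "é" is just an example):

let s = "é" // U+00E9
print(Array(s.utf8))                     // [195, 169] — two UTF-8 code units
print(s.unicodeScalars.map { $0.value }) // [233] — one code point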

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type         | Original Author | Original Content on Stackoverflow
Question             | nathan          | View Question on Stackoverflow
Solution 1 - Unicode | Erik            | View Answer on Stackoverflow
Solution 2 - Unicode | SLN             | View Answer on Stackoverflow
Solution 3 - Unicode | newacct         | View Answer on Stackoverflow
Solution 4 - Unicode | evpozdniakov    | View Answer on Stackoverflow
Solution 5 - Unicode | Imanou Petit    | View Answer on Stackoverflow
Solution 6 - Unicode | Binarian        | View Answer on Stackoverflow