UnicodeEncodeError: 'ascii' codec can't encode character u'\xa0' in position 20: ordinal not in range(128)


Python Problem Overview


I'm having problems dealing with unicode characters from text fetched from different web pages (on different sites). I am using BeautifulSoup.

The problem is that the error is not always reproducible; it sometimes works with some pages, and sometimes, it barfs by throwing a UnicodeEncodeError. I have tried just about everything I can think of, and yet I have not found anything that works consistently without throwing some kind of Unicode-related error.

One of the sections of code that is causing problems is shown below:

agent_telno = agent.find('div', 'agent_contact_number')
agent_telno = '' if agent_telno is None else agent_telno.contents[0]
p.agent_info = str(agent_contact + ' ' + agent_telno).strip()

Here is a stack trace produced on SOME strings when the snippet above is run:

Traceback (most recent call last):
  File "foobar.py", line 792, in <module>
    p.agent_info = str(agent_contact + ' ' + agent_telno).strip()
UnicodeEncodeError: 'ascii' codec can't encode character u'\xa0' in position 20: ordinal not in range(128)

I suspect that this is because some pages (or, more specifically, pages from some of the sites) may be encoded, whilst others may be unencoded. All the sites are based in the UK and provide data meant for UK consumption - so there are no issues relating to internationalization or dealing with text written in anything other than English.

Does anyone have any ideas as to how to solve this so that I can CONSISTENTLY fix this problem?

Python Solutions


Solution 1 - Python

You need to read the Python Unicode HOWTO. This error is the very first example.

Basically, stop using str to convert from unicode to encoded text / bytes.

Instead, properly use .encode() to encode the string:

p.agent_info = u' '.join((agent_contact, agent_telno)).encode('utf-8').strip()

or work entirely in unicode.
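A minimal sketch of the same idea in Python 3 syntax (the values for agent_contact and agent_telno are made up here, standing in for the scraped data in the question): build the full text as unicode, and encode only once, at the output boundary.

```python
# Hypothetical stand-ins for the scraped values in the question;
# \xa0 is the NO-BREAK SPACE that made str() blow up.
agent_contact = u'Agent Smith'
agent_telno = u'Tel:\xa0020 7946 0018'

# Join while the pieces are still unicode; encode once, on the way out.
agent_info = u' '.join((agent_contact, agent_telno)).encode('utf-8').strip()
print(agent_info)
```

The key design choice: no implicit str() coercion ever happens, so the ascii codec is never invoked behind your back.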

Solution 2 - Python

This is a classic python unicode pain point! Consider the following:

a = u'bats\u00E0'
print a
 => batsà

All good so far, but if we call str(a), let's see what happens:

str(a)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeEncodeError: 'ascii' codec can't encode character u'\xe0' in position 4: ordinal not in range(128)

Oh dip, that's not gonna do anyone any good! To fix the error, encode the unicode string explicitly with .encode and tell python what codec to use:

a.encode('utf-8')
 => 'bats\xc3\xa0'
print a.encode('utf-8')
 => batsà

Voilà!

The issue is that when you call str(), python uses the default character encoding (ascii) to try to encode the string you gave it, which in your case sometimes contains characters outside the ascii range. To fix the problem, you have to tell python how to deal with the string you give it by using .encode('whatever_unicode'). Most of the time, you should be fine using utf-8.

For an excellent exposition on this topic, see Ned Batchelder's PyCon talk here: http://nedbatchelder.com/text/unipain.html
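The round trip described above can be checked directly (Python 3 syntax, where the default string type is unicode):

```python
a = u'bats\u00E0'              # 'batsà'
encoded = a.encode('utf-8')    # explicit codec, so python never guesses
assert encoded == b'bats\xc3\xa0'
assert encoded.decode('utf-8') == a   # decoding reverses the encoding
```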

Solution 3 - Python

I found an elegant workaround that removes the offending symbols and keeps the value a string, as follows:

yourstring = yourstring.encode('ascii', 'ignore').decode('ascii')

It's important to notice that using the ignore option is dangerous because it silently drops any unicode (and internationalization) support from the code that uses it, as seen here:

>>> u'City: Malmö'.encode('ascii', 'ignore').decode('ascii')
'City: Malm'
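If silently dropping characters is too aggressive, the 'replace' error handler at least leaves a visible marker where a character was lost; a quick comparison (Python 3 syntax):

```python
city = u'City: Malm\xf6'   # 'City: Malmö'
assert city.encode('ascii', 'ignore').decode('ascii') == 'City: Malm'     # ö silently dropped
assert city.encode('ascii', 'replace').decode('ascii') == 'City: Malm?'   # ö visibly marked
```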

Solution 4 - Python

Well, I tried everything, but it did not help. After googling around, I figured out the following, and it helped. Python 2.7 is in use.

# encoding=utf8
import sys
reload(sys)
sys.setdefaultencoding('utf8')

Solution 5 - Python

A subtle problem causing even print to fail is having your environment variables set wrong, e.g. here LC_ALL is set to "C". In Debian they discourage setting it: see the Debian wiki on Locale.

$ echo $LANG
en_US.utf8
$ echo $LC_ALL 
C
$ python -c "print (u'voil\u00e0')"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
UnicodeEncodeError: 'ascii' codec can't encode character u'\xe0' in position 4: ordinal not in range(128)
$ export LC_ALL='en_US.utf8'
$ python -c "print (u'voil\u00e0')"
voilà
$ unset LC_ALL
$ python -c "print (u'voil\u00e0')"
voilà

Solution 6 - Python

The problem is that you're trying to print a unicode character, but your terminal doesn't support it.

You can try installing language-pack-en package to fix that:

sudo apt-get install language-pack-en

which provides English translation data updates for all supported packages (including Python). Install a different language package if necessary (depending on which characters you're trying to print).

On some Linux distributions it's required in order to make sure that the default English locales are set up properly (so unicode characters can be handled by shell/terminal). Sometimes it's easier to install it than to configure it manually.

Then when writing the code, make sure you use the right encoding in your code.

For example (Python 3; on Python 2, use io.open):

open(foo, encoding='utf-8')

If you've still a problem, double check your system configuration, such as:

  • Your locale file (/etc/default/locale), which should have e.g.

      LANG="en_US.UTF-8"
      LC_ALL="en_US.UTF-8"
    

    or:

      LC_ALL=C.UTF-8
      LANG=C.UTF-8
    
  • Value of LANG/LC_CTYPE in shell.

  • Check which locale your shell supports by:

      locale -a | grep "UTF-8"
    

Demonstrating the problem and the solution in a fresh VM.

  1. Initialize and provision the VM (e.g. using vagrant):

     vagrant init ubuntu/trusty64; vagrant up; vagrant ssh

     See: available Ubuntu boxes.

  2. Printing unicode characters (such as the trade mark sign ™):

     $ python -c 'print(u"\u2122");'
     Traceback (most recent call last):
       File "<string>", line 1, in <module>
     UnicodeEncodeError: 'ascii' codec can't encode character u'\u2122' in position 0: ordinal not in range(128)

  3. Now installing language-pack-en:

     $ sudo apt-get -y install language-pack-en
     The following extra packages will be installed:
       language-pack-en-base
     Generating locales...
       en_GB.UTF-8... /usr/sbin/locale-gen: done
     Generation complete.

  4. Now the problem should be solved:

     $ python -c 'print(u"\u2122");'
     ™

  5. Otherwise, try the following command:

     $ LC_ALL=C.UTF-8 python -c 'print(u"\u2122");'

Solution 7 - Python

In shell:

  1. Find a supported UTF-8 locale with the following command:

     locale -a | grep "UTF-8"

  2. Export it before running the script, e.g.:

     export LC_ALL=$(locale -a | grep UTF-8 | head -n1)

     or manually, like:

     export LC_ALL=C.UTF-8

  3. Test it by printing a special character, e.g. ™:

     python -c 'print(u"\u2122");'

Above tested in Ubuntu.

Solution 8 - Python

I've actually found that in most of my cases, just stripping out those characters is much simpler:

s = mystring.decode('ascii', 'ignore')
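Note the direction: .decode with 'ignore' strips undecodable bytes from a byte string, while .encode with 'ignore' strips unencodable characters from a unicode string. A sketch in Python 3, where the bytes/str split is explicit:

```python
raw = b'Malm\xc3\xb6'    # UTF-8 bytes for 'Malmö'
assert raw.decode('ascii', 'ignore') == 'Malm'   # both bytes of ö are dropped

# The encode direction, for text that is already unicode:
assert u'Malm\xf6'.encode('ascii', 'ignore').decode('ascii') == 'Malm'
```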

Solution 9 - Python

For me, what worked was:

BeautifulSoup(html_text, from_encoding="utf-8")

Hope this helps someone.

Solution 10 - Python

Here's a rehashing of some other so-called "cop out" answers. There are situations in which simply throwing away the troublesome characters/strings is a good solution, despite the protests voiced here.

def safeStr(obj):
    try: return str(obj)
    except UnicodeEncodeError:
        return obj.encode('ascii', 'ignore').decode('ascii')
    except: return ""

Testing it:

if __name__ == '__main__': 
    print safeStr( 1 ) 
    print safeStr( "test" ) 
    print u'98\xb0'
    print safeStr( u'98\xb0' )

Results:

1
test
98°
98

UPDATE: My original answer was written for Python 2. For Python 3:

def safeStr(obj):
    try: return str(obj).encode('ascii', 'ignore').decode('ascii')
    except: return ""

Note: if you'd prefer to leave a ? indicator where the "unsafe" unicode characters are, specify replace instead of ignore in the call to encode for the error handler.
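As a worked example of that note, here is the same Python 3 helper with the error handler exposed as a parameter (a sketch, not the author's original):

```python
def safeStr(obj, errors='replace'):
    # 'replace' leaves a ? where a character can't be encoded; 'ignore' deletes it
    try: return str(obj).encode('ascii', errors).decode('ascii')
    except: return ""

print(safeStr(u'98\xb0'))            # 98?
print(safeStr(u'98\xb0', 'ignore'))  # 98
```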

Suggestion: you might want to name this function toAscii instead? That's a matter of preference...

Finally, here's a more robust PY2/3 version using six, where I opted to use replace, and peppered in some character swaps to replace fancy unicode quotes and apostrophes which curl left or right with the simple vertical ones that are part of the ascii set. You might expand on such swaps yourself:

from six import PY2, iteritems

CHAR_SWAP = { u'\u201c': u'"'
            , u'\u201d': u'"'
            , u'\u2018': u"'"
            , u'\u2019': u"'"
}

def toAscii( text ):
    try:
        for k, v in iteritems( CHAR_SWAP ):
            text = text.replace(k, v)
    except: pass
    try: return str( text ) if PY2 else bytes( text, 'ascii', 'replace' ).decode('ascii')
    except UnicodeEncodeError:
        return text.encode('ascii', 'replace').decode('ascii')
    except: return ""

if __name__ == '__main__':
    print( toAscii( u'testin\u2019' ) )

Solution 11 - Python

Add the line below at the beginning of your script (or as the second line):

# -*- coding: utf-8 -*-

That's the declaration of the Python source code encoding. More info in PEP 263.

Solution 12 - Python

I always put the code below in the first two lines of the python files:

# -*- coding: utf-8 -*-
from __future__ import unicode_literals

Solution 13 - Python

Alas, this works in Python 3 at least...

Python 3

Sometimes the error is in the environment variables and encoding, so

import os
import locale
os.environ["PYTHONIOENCODING"] = "utf-8"
myLocale=locale.setlocale(category=locale.LC_ALL, locale="en_GB.UTF-8")
... 
print(myText.encode('utf-8', errors='ignore'))

where encoding errors are ignored.

Solution 14 - Python

It works for me:

export LC_CTYPE="en_US.UTF-8"

Solution 15 - Python

Simple helper functions found here.

def safe_unicode(obj, *args):
    """ return the unicode representation of obj """
    try:
        return unicode(obj, *args)
    except UnicodeDecodeError:
        # obj is byte string
        ascii_text = str(obj).encode('string_escape')
        return unicode(ascii_text)

def safe_str(obj):
    """ return the byte string representation of obj """
    try:
        return str(obj)
    except UnicodeEncodeError:
        # obj is unicode
        return unicode(obj).encode('unicode_escape')
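These helpers are Python 2 only: unicode() and the string_escape codec are gone in Python 3. A rough Python 3 counterpart of safe_str, assuming the goal is "never raise, escape instead", could use the backslashreplace error handler:

```python
def safe_str(obj):
    """Python 3 sketch: escape what ascii can't represent instead of raising."""
    return str(obj).encode('ascii', 'backslashreplace').decode('ascii')

print(safe_str(u'voil\xe0'))  # voil\xe0
```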

Solution 16 - Python

Just add encode('utf-8') to the variable:

agent_contact.encode('utf-8')

Solution 17 - Python

Open a terminal and run the command below:

export LC_ALL="en_US.UTF-8"

Solution 18 - Python

I just used the following:

import unicodedata
message = unicodedata.normalize("NFKD", message)

Check what documentation says about it:

> unicodedata.normalize(form, unistr)
> Return the normal form form for the Unicode string unistr. Valid values for form are 'NFC', 'NFKC', 'NFD', and 'NFKD'.
>
> The Unicode standard defines various normalization forms of a Unicode string, based on the definition of canonical equivalence and compatibility equivalence. In Unicode, several characters can be expressed in various ways. For example, the character U+00C7 (LATIN CAPITAL LETTER C WITH CEDILLA) can also be expressed as the sequence U+0043 (LATIN CAPITAL LETTER C) U+0327 (COMBINING CEDILLA).
>
> For each character, there are two normal forms: normal form C and normal form D. Normal form D (NFD) is also known as canonical decomposition, and translates each character into its decomposed form. Normal form C (NFC) first applies a canonical decomposition, then composes pre-combined characters again.
>
> In addition to these two forms, there are two additional normal forms based on compatibility equivalence. In Unicode, certain characters are supported which normally would be unified with other characters. For example, U+2160 (ROMAN NUMERAL ONE) is really the same thing as U+0049 (LATIN CAPITAL LETTER I). However, it is supported in Unicode for compatibility with existing character sets (e.g. gb2312).
>
> The normal form KD (NFKD) will apply the compatibility decomposition, i.e. replace all compatibility characters with their equivalents. The normal form KC (NFKC) first applies the compatibility decomposition, followed by the canonical composition.
>
> Even if two unicode strings are normalized and look the same to a human reader, if one has combining characters and the other doesn't, they may not compare equal.

Solves it for me. Simple and easy.
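This is directly relevant to the question's traceback: NFKD replaces compatibility characters such as U+00A0 (NO-BREAK SPACE, the u'\xa0' in the error) with their plain equivalents:

```python
import unicodedata

text = u'Tel:\xa0123'                        # NO-BREAK SPACE after the colon
normalized = unicodedata.normalize('NFKD', text)
assert normalized == u'Tel: 123'             # u'\xa0' became a plain space
normalized.encode('ascii')                   # no longer raises UnicodeEncodeError
```

Note that NFKD alone doesn't guarantee pure ASCII: accented letters decompose into a base letter plus a combining mark, which still isn't ASCII-encodable unless you drop it.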

Solution 19 - Python

Late answer, but this error is related to your terminal's encoding not supporting certain characters.
I fixed it on python3 using:

import sys
import io

sys.stdout = io.open(sys.stdout.fileno(), 'w', encoding='utf8')
print("é, à, ...")

Solution 20 - Python

The solution below worked for me. I just added

> u"String"

(representing the string as unicode) before my string.

result_html = result.to_html(col_space=1, index=False, justify={'right'})

text = u"""
<html>
<body>
<p>
Hello all, <br>
<br>
Here's weekly summary report.  Let me know if you have any questions. <br>
<br>
Data Summary <br>
<br>
<br>
{0}
</p>
<p>Thanks,</p>
<p>Data Team</p>
</body></html>
""".format(result_html)

Solution 21 - Python

In the general case of writing a string with this unsupported encoding (let's say data_that_causes_this_error) to some file (e.g. results.txt), this works:

f = open("results.txt", "w")
f.write(data_that_causes_this_error.encode('utf-8'))
f.close()
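An alternative sketch that avoids the manual encode: open the file in text mode with an explicit encoding (io.open behaves the same on Python 2 and 3) and write the unicode string directly. The file name and data here are made up for illustration:

```python
import io
import os
import tempfile

data = u'voil\xe0'   # hypothetical stand-in for data_that_causes_this_error
path = os.path.join(tempfile.mkdtemp(), 'results.txt')

# Text mode with an explicit encoding: the file object encodes on the way out.
with io.open(path, 'w', encoding='utf-8') as f:
    f.write(data)

with io.open(path, encoding='utf-8') as f:
    assert f.read() == data
```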

Solution 22 - Python

In case it's an issue with a print statement, a lot of times it's just an issue with the terminal printing. This helped me: export PYTHONIOENCODING=UTF-8

Solution 23 - Python

I just had this problem, and Google led me here, so just to add to the general solutions here, this is what worked for me:

# 'value' contains the problematic data
unic = u''
unic += value
value = unic

I had this idea after reading Ned's presentation.

I don't claim to fully understand why this works, though. So if anyone can edit this answer or put in a comment to explain, I'll appreciate it.

Solution 24 - Python

We struck this error when running manage.py migrate in Django with localized fixtures.

Our source contained the # -*- coding: utf-8 -*- declaration, MySQL was correctly configured for utf8 and Ubuntu had the appropriate language pack and values in /etc/default/locale.

The issue was simply that the Django container (we use docker) was missing the LANG env var.

Setting LANG to en_US.UTF-8 and restarting the container before re-running migrations fixed the problem.

Solution 25 - Python

Update for Python 3.0 and later. Try the following in a terminal:

locale-gen en_US.UTF-8
export LANG=en_US.UTF-8 LANGUAGE=en_US.en
export LC_ALL=en_US.UTF-8

This sets the system's default locale encoding to the UTF-8 format.

More can be read here at PEP 538 -- Coercing the legacy C locale to a UTF-8 based locale.

Solution 26 - Python

The recommended solution did not work for me, and I could live with dumping all non-ASCII characters, so

s = s.encode('ascii',errors='ignore')

which left me with something stripped that doesn't throw errors.

Solution 27 - Python

Many answers here (@agf and @Andbdrew for example) have already addressed the most immediate aspects of the OP's question.

However, I think there is one subtle but important aspect that has been largely ignored and that matters dearly for everyone who, like me, ended up here while trying to make sense of encodings in Python: Python 2 and Python 3 manage character representation wildly differently. I feel like a big chunk of the confusion out there has to do with people reading about encodings in Python without being version aware.

I suggest anyone interested in understanding the root cause of the OP's problem begin by reading Spolsky's introduction to character representations and Unicode, and then move to Batchelder on Unicode in Python 2 and Python 3.

Solution 28 - Python

Try to avoid converting a variable with str(variable). Sometimes, it may cause the issue.

Simple tip to avoid it:

try:
    data = str(data)
except UnicodeEncodeError:
    pass # Don't convert to string

The above example will solve the encode error as well.

Solution 29 - Python

If you have something like packet_data = "This is data", then do this on the next line, right after initializing packet_data:

unic = u''
unic += packet_data
packet_data = unic

Solution 30 - Python

I had this issue trying to output Unicode characters to stdout, but with sys.stdout.write, rather than print (so that I could support output to a different file as well).

From BeautifulSoup's own documentation, I solved this with the codecs library:

import sys
import codecs

def main(fIn, fOut):
    soup = BeautifulSoup(fIn)
    # Do processing, with data including non-ASCII characters
    fOut.write(unicode(soup))

if __name__ == '__main__':
    with sys.stdin as fIn: # Don't think we need codecs.getreader here
        with codecs.getwriter('utf-8')(sys.stdout) as fOut:
            main(fIn, fOut)

Solution 31 - Python

This problem often happens when a django project is deployed using Apache, because Apache sets the environment variable LANG=C in /etc/sysconfig/httpd. Just open the file and comment out (or change to your flavor) this setting. Or use the lang option of the WSGIDaemonProcess command; in this case you will be able to set a different LANG environment variable for different virtual hosts.

Solution 32 - Python

This will work (assuming re and unicodedata are imported):

>>> print(unicodedata.normalize('NFD', re.sub("[\(\[].*?[\)\]]", "", "bats\xc3\xa0")).encode('ascii', 'ignore'))

Output:

bats
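Applied to a properly decoded unicode string (rather than a mojibake byte sequence), the same NFD-then-ignore combination is a common accent-stripping trick: decomposition splits a character like à into a plus a combining accent, and the ascii encode then drops only the accent. A small sketch:

```python
import unicodedata

def strip_accents(text):
    # NFD splits each accented letter into base letter + combining mark;
    # the ascii encode with 'ignore' then drops only the marks.
    decomposed = unicodedata.normalize('NFD', text)
    return decomposed.encode('ascii', 'ignore').decode('ascii')

print(strip_accents(u'bats\xe0'))  # batsa
print(strip_accents(u'Malm\xf6'))  # Malmo
```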

Solution 33 - Python

You can set the character encoding to UTF-8 before running your script:

export LC_CTYPE="en_US.UTF-8"

This should generally resolve the issue.

Solution 34 - Python

You can use unicodedata to avoid the UnicodeEncodeError. Here is an example:

import unicodedata

agent_telno = agent.find('div', 'agent_contact_number')
agent_telno = '' if agent_telno is None else agent_telno.contents[0]
agent_telno = unicodedata.normalize("NFKD", agent_telno) # replaces unwanted characters like u'\xa0' with their plain equivalents
p.agent_info = str(agent_contact + ' ' + agent_telno).strip()

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type | Original Author | Original Content on Stackoverflow
Question | Homunculus Reticulli | View Question on Stackoverflow
Solution 1 - Python | agf | View Answer on Stackoverflow
Solution 2 - Python | Andbdrew | View Answer on Stackoverflow
Solution 3 - Python | Max Korolevsky | View Answer on Stackoverflow
Solution 4 - Python | Ashwin | View Answer on Stackoverflow
Solution 5 - Python | maxpolk | View Answer on Stackoverflow
Solution 6 - Python | kenorb | View Answer on Stackoverflow
Solution 7 - Python | kenorb | View Answer on Stackoverflow
Solution 8 - Python | Phil LaNasa | View Answer on Stackoverflow
Solution 9 - Python | Animesh | View Answer on Stackoverflow
Solution 10 - Python | BuvinJ | View Answer on Stackoverflow
Solution 11 - Python | Andriy Ivaneyko | View Answer on Stackoverflow
Solution 12 - Python | Pereira | View Answer on Stackoverflow
Solution 13 - Python | hhh | View Answer on Stackoverflow
Solution 14 - Python | Caleb | View Answer on Stackoverflow
Solution 15 - Python | Parag Tyagi | View Answer on Stackoverflow
Solution 16 - Python | Kairat Koibagarov | View Answer on Stackoverflow
Solution 17 - Python | Hồ Ngọc Vượng | View Answer on Stackoverflow
Solution 18 - Python | Drag0 | View Answer on Stackoverflow
Solution 19 - Python | Pedro Lobito | View Answer on Stackoverflow
Solution 20 - Python | Aravind Krishnakumar | View Answer on Stackoverflow
Solution 21 - Python | Pe Dro | View Answer on Stackoverflow
Solution 22 - Python | Dreams | View Answer on Stackoverflow
Solution 23 - Python | pepoluan | View Answer on Stackoverflow
Solution 24 - Python | followben | View Answer on Stackoverflow
Solution 25 - Python | ZF007 | View Answer on Stackoverflow
Solution 26 - Python | Gulzar | View Answer on Stackoverflow
Solution 27 - Python | Simón Ramírez Amaya | View Answer on Stackoverflow
Solution 28 - Python | sam ruben | View Answer on Stackoverflow
Solution 29 - Python | Nandan Kulkarni | View Answer on Stackoverflow
Solution 30 - Python | palswim | View Answer on Stackoverflow
Solution 31 - Python | shmakovpn | View Answer on Stackoverflow
Solution 32 - Python | huzefausama | View Answer on Stackoverflow
Solution 33 - Python | Babatunde Adeyemi | View Answer on Stackoverflow
Solution 34 - Python | boyenec | View Answer on Stackoverflow