How to read a single character at a time from a file in Python?

Python, File IO, Character

Python Problem Overview


Can anyone tell me how can I do this?

Python Solutions


Solution 1 - Python

with open(filename) as f:
    while True:
        c = f.read(1)
        if not c:
            print("End of file")
            break
        print("Read a character:", c)

Solution 2 - Python

First, open a file:

with open("filename") as fileobj:
    for line in fileobj:  
       for ch in line: 
           print(ch)

This goes through every line in the file and then every character in that line.
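
Note that the newline at the end of each line is itself a character, so it is yielded too. A tiny sketch (the file name is a placeholder) that makes this visible:

with open("example.txt") as fileobj:
    for line in fileobj:
        for ch in line:
            print(repr(ch))   # e.g. 'a', 'b', '\n', ...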

Solution 3 - Python

I like the accepted answer: it is straightforward and will get the job done. I would also like to offer an alternative implementation:

def chunks(filename, buffer_size=4096):
    """Reads `filename` in chunks of `buffer_size` bytes and yields each chunk
    until no more bytes can be read; the last chunk will most likely have
    fewer than `buffer_size` bytes.

    :param str filename: Path to the file
    :param int buffer_size: Buffer size, in bytes (default is 4096)
    :return: Yields chunks of up to `buffer_size` bytes until the file is exhausted
    :rtype: bytes

    """
    with open(filename, "rb") as fp:
        chunk = fp.read(buffer_size)
        while chunk:
            yield chunk
            chunk = fp.read(buffer_size)

def chars(filename, buffersize=4096):
    """Yields the contents of file `filename` character-by-character. Warning:
    will only work for encodings where one character is encoded as one byte.

    :param str filename: Path to the file
    :param int buffersize: Buffer size for the underlying chunks,
    in bytes (default is 4096)
    :return: Yields the contents of `filename` character-by-character.
    :rtype: bytes

    """
    for chunk in chunks(filename, buffersize):
        # Slice rather than iterate directly so each item is a 1-byte bytes
        # object (iterating over a bytes object yields ints in Python 3).
        for i in range(len(chunk)):
            yield chunk[i:i + 1]

def main(buffersize, filenames):
    """Reads several files character by character and redirects their contents
    to `/dev/null`.

    """
    for filename in filenames:
        with open("/dev/null", "wb") as fp:
            for char in chars(filename, buffersize):
                fp.write(char)

if __name__ == "__main__":
    # Try reading several files varying the buffer size
    import sys
    buffersize = int(sys.argv[1])
    filenames  = sys.argv[2:]
    sys.exit(main(buffersize, filenames))

The code I suggest is essentially the same idea as your accepted answer: read a given number of bytes from the file. The difference is that it first reads a good chunk of data (4096 is a good default for x86, but you may want to try 1024 or 8192; any multiple of your page size), and then it yields the characters in that chunk one by one.
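
If you need the same chunking idea to work with multi-byte encodings such as UTF-8, a minimal sketch in text mode (the function name and defaults are mine, not part of the original answer) could look like this; opening the file in text mode lets Python do the decoding:

def text_chars(filename, buffersize=4096, encoding="utf-8"):
    """Yield characters from `filename`, reading up to `buffersize`
    characters at a time."""
    with open(filename, encoding=encoding) as fp:
        while True:
            chunk = fp.read(buffersize)   # up to `buffersize` characters
            if not chunk:
                break
            for ch in chunk:
                yield ch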

The code I present may be faster for larger files. Take, for example, the entire text of War and Peace, by Tolstoy. These are my timing results (MacBook Pro running OS X 10.7.4; so.py is the name I gave to the code pasted above):

$ time python so.py 1 2600.txt.utf-8
python so.py 1 2600.txt.utf-8  3.79s user 0.01s system 99% cpu 3.808 total
$ time python so.py 4096 2600.txt.utf-8
python so.py 4096 2600.txt.utf-8  1.31s user 0.01s system 99% cpu 1.318 total

Now, do not take the buffer size of 4096 as a universal truth; look at the results I get for different sizes (buffer size in bytes vs. wall time in seconds):

   2 2.726 
   4 1.948 
   8 1.693 
  16 1.534 
  32 1.525 
  64 1.398 
 128 1.432 
 256 1.377 
 512 1.347 
1024 1.442 
2048 1.316 
4096 1.318 

As you can see, the gains appear early on (and my timings are likely imprecise); the buffer size is a trade-off between performance and memory. The default of 4096 is just a reasonable choice but, as always, measure first.
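
To reproduce this kind of measurement yourself, a quick sketch using time.perf_counter (the file name and buffer sizes are placeholders) might look like:

import time

def time_consume(filename, buffersize):
    """Drain the character generator and return the elapsed wall time."""
    start = time.perf_counter()
    for _ in chars(filename, buffersize):
        pass
    return time.perf_counter() - start

for size in (2, 64, 1024, 4096, 8192):
    print(size, round(time_consume("2600.txt.utf-8", size), 3))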

Solution 4 - Python

Just:

myfile = open(filename)
onecharacter = myfile.read(1)
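
Keep in mind that the file stays open until it is closed or garbage-collected; the same two calls inside a with block are usually preferable:

with open(filename) as myfile:
    onecharacter = myfile.read(1)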

Solution 5 - Python

Python itself can help you with this, in interactive mode (this example is from Python 2, where the built-in `file` type exists):

>>> help(file.read)
Help on method_descriptor:

read(...)
    read([size]) -> read at most size bytes, returned as a string.

    If the size argument is negative or omitted, read until EOF is reached.
    Notice that when in non-blocking mode, less data than what was requested
    may be returned, even if no size parameter was given.
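
In Python 3 the `file` type is gone, so you would ask for help on an open file object, or on io.TextIOWrapper.read, instead (output omitted here):

import io

help(io.TextIOWrapper.read)   # help for text-mode read()

# or, on a file object you already have:
with open("filename") as f:
    help(f.read)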

Solution 6 - Python

I learned a new idiom for this today while watching Raymond Hettinger's Transforming Code into Beautiful, Idiomatic Python:

import functools

with open(filename) as f:
    f_read_ch = functools.partial(f.read, 1)
    for ch in iter(f_read_ch, ''):
        print('Read a character:', repr(ch))
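
The same idiom can be written without functools by passing a lambda as the callable argument to iter(); which form you prefer is a matter of taste:

with open(filename) as f:
    for ch in iter(lambda: f.read(1), ''):
        print('Read a character:', repr(ch))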

Solution 7 - Python

Just read a single character:

f.read(1)

Solution 8 - Python

This will also work:

with open("filename") as fileObj:
    for line in fileObj:  
        for ch in line:
            print(ch)

It goes through every line in the file and every character in every line.

(Note that this post now looks extremely similar to a highly upvoted answer, but this was not the case at the time of writing.)

Solution 9 - Python

Best answer for Python 3.8+:

with open(path, encoding="utf-8") as f:
    while c := f.read(1):
        do_my_thing(c)

You may want to specify utf-8 explicitly rather than relying on the platform's default encoding, as done here.

Function – Python 3.8+:

def stream_file_chars(path: str):
    with open(path, encoding="utf-8") as f:
        while c := f.read(1):
            yield c

Function – Python <= 3.7:

def stream_file_chars(path: str):
    with open(path, encoding="utf-8") as f:
        while True:
            c = f.read(1)
            if c == "":
                break
            yield c

Function – pathlib + documentation:

from pathlib import Path
from typing import Union, Generator

def stream_file_chars(path: Union[str, Path]) -> Generator[str, None, None]:
    """Streams characters from a file."""
    with Path(path).open(encoding="utf-8") as f:
        while (c := f.read(1)) != "":
            yield c
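
A quick usage sketch (the file name is a placeholder): count the characters in a file without loading it all into memory at once.

n_chars = sum(1 for _ in stream_file_chars("war_and_peace.txt"))
print(f"{n_chars} characters")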

Solution 10 - Python

Use f.read(1); it is the correct and idiomatic way to read a single character.
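
For completeness, a small sketch (file name is a placeholder) showing that read(1) returns a one-character string, and an empty string once the end of the file has been reached:

with open("example.txt") as f:
    first = f.read(1)        # one character, or '' if the file is empty
    rest = f.read()          # everything else
    assert f.read(1) == ''   # at EOF, read(1) returns the empty string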

Solution 11 - Python

f = open('hi.txt', 'w')
f.write('0123456789abcdef')
f.close()

f = open('hi.txt', 'r')
f.seek(12)
print(f.read(1))  # This will read just "c"
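
Note that in text mode, seek() offsets are only guaranteed to be meaningful when they come from tell() (or are zero); if you want to jump to an arbitrary byte position, a binary-mode sketch is safer:

with open('hi.txt', 'rb') as f:
    f.seek(12)          # byte offset, well defined in binary mode
    print(f.read(1))    # b'c'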

Solution 12 - Python

As a supplement: if you are reading a file that contains a very long line, which could exhaust your memory, consider reading it into a buffer and then yielding each character:

def read_char(inputfile, buffersize=10240):
    with open(inputfile, 'r') as f:
        while True:
            buf = f.read(buffersize)
            if not buf:
                break
            for char in buf:
                yield char
        yield ''  # handle the case where the file is empty

if __name__ == "__main__":
    for char in read_char('./very_large_file.txt'):
        process(char)

Solution 13 - Python

This reads single characters from standard input (for example, keypresses) rather than from a file; it first disables the terminal's line buffering and echo.

import os
import sys

# Put the terminal into non-canonical, no-echo mode (no line buffering, no echo)
os.system("stty -icanon -echo")
while True:
    raw_c = sys.stdin.buffer.peek()  # raw bytes waiting in the buffer
    c = sys.stdin.read(1)
    print(f"Char: {c}")
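
If you want the terminal settings restored automatically when the loop ends, a sketch using the standard termios module (Unix-only; an alternative to shelling out to stty, not part of the original answer):

import sys
import termios

fd = sys.stdin.fileno()
saved = termios.tcgetattr(fd)                    # remember the current settings
try:
    new = termios.tcgetattr(fd)
    new[3] &= ~(termios.ICANON | termios.ECHO)   # lflags: no line buffering, no echo
    termios.tcsetattr(fd, termios.TCSADRAIN, new)
    while True:
        c = sys.stdin.read(1)
        print(f"Char: {c!r}")
finally:
    termios.tcsetattr(fd, termios.TCSADRAIN, saved)   # always restore the terminal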

Solution 14 - Python

# Read the whole file into memory at once, then print it character by character
with open('file.txt') as f:
    for ch in f.read():
        print(ch)

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type | Original Author | Original Content on Stackoverflow
Question | kaushik | View Question on Stackoverflow
Solution 1 - Python | jchl | View Answer on Stackoverflow
Solution 2 - Python | Raj | View Answer on Stackoverflow
Solution 3 - Python | Escualo | View Answer on Stackoverflow
Solution 4 - Python | joaquin | View Answer on Stackoverflow
Solution 5 - Python | Mattias Nilsson | View Answer on Stackoverflow
Solution 6 - Python | Michael Kropat | View Answer on Stackoverflow
Solution 7 - Python | David Sykes | View Answer on Stackoverflow
Solution 8 - Python | Pro Q | View Answer on Stackoverflow
Solution 9 - Python | Douglas Myers-Turnbull | View Answer on Stackoverflow
Solution 10 - Python | Johan Kotlinski | View Answer on Stackoverflow
Solution 11 - Python | user1489833 | View Answer on Stackoverflow
Solution 12 - Python | pambda | View Answer on Stackoverflow
Solution 13 - Python | David Hamner | View Answer on Stackoverflow
Solution 14 - Python | ParagAb | View Answer on Stackoverflow