Reading an entire file into memory with read_to_string is not a good idea for large files, because it would consume a large chunk of memory. In that case, it is much better to use the buffered reading provided by BufReader and BufRead from std::io; this way, lines are read in and processed one by one.
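A minimal sketch of this pattern looks like the following (the file name large.txt is only a placeholder):

use std::fs::File;
use std::io::{BufRead, BufReader};

fn main() {
    // Wrap the file in a BufReader so reads are buffered.
    let file = BufReader::new(File::open("large.txt").unwrap());
    // lines() yields each line as an io::Result<String>, one at a time,
    // so the whole file is never held in memory at once.
    for line in file.lines() {
        println!("{}", line.unwrap());
    }
}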
In the following example, a file with numerical information is processed. Each line contains two fields, an integer and a float, separated by a space:
// code from Chapter 11/code/reading_text_file.rs:
use std::io::{BufRead, BufReader};
use std::fs::File;

fn main() {
    let file = BufReader::new(File::open("numbers.txt").unwrap());
    let pairs: Vec<_> = file.lines().map(|line| {
        let line = line.unwrap();
        let line = line.trim();
        let mut words = line.split(" ");
        let left = words.next().expect("Unexpected empty line!");
        let right = words.next().expect("Expected number!");
        (
            left.parse::<u64>().ok().expect("Expected integer in first column!"),
            right.parse::<f64>().ok().expect("Expected float in second column!")
        )
    }).collect();
    println!("{:?}", pairs);
}
This prints out:
[(120, 345.56), (125, 341.56)]
The information is collected in pairs, which is a Vec<(u64, f64)> that can then be processed as you wish.
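For instance, a small sketch like the following shows one way to process such a vector (summing the float column is only an illustration, not part of the original program):

fn main() {
    // Suppose pairs was collected as in the previous example:
    let pairs: Vec<(u64, f64)> = vec![(120, 345.56), (125, 341.56)];
    // Sum the second (float) column with fold.
    let total = pairs.iter().fold(0.0, |acc, &(_, f)| acc + f);
    println!("Sum of the second column: {}", total);
}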
For a more complex line structure, you would want to work with struct values describing the line content instead of pairs, like this:
struct LineData {
    string1: String,
    int1: i32,
    string2: String,
    // Some other fields
}
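As a sketch, assuming each line looks like "abc 42 def", such a struct could be filled in with a helper function like this (parse_line is a hypothetical name, not part of the original code):

struct LineData {
    string1: String,
    int1: i32,
    string2: String,
}

fn parse_line(line: &str) -> LineData {
    let mut words = line.split(" ");
    LineData {
        string1: words.next().expect("Expected first field!").to_string(),
        int1: words.next().expect("Expected second field!")
                   .parse::<i32>().expect("Expected integer in second field!"),
        string2: words.next().expect("Expected third field!").to_string(),
    }
}

fn main() {
    let data = parse_line("abc 42 def");
    println!("{} {} {}", data.string1, data.int1, data.string2);
}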
In production code, the unwrap() and expect() calls should be replaced by more robust error handling, using pattern matching on the returned Option and Result values and/or the try! macro.
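As a sketch of that approach, a single field could be handled with match like this instead of with expect():

fn main() {
    let line = "120 345.56";
    let mut words = line.split(" ");
    // Match on the Option returned by next() and on the Result returned
    // by parse() instead of calling unwrap() or expect().
    match words.next() {
        Some(word) => match word.parse::<u64>() {
            Ok(n) => println!("First column: {}", n),
            Err(e) => println!("Not an integer: {}", e),
        },
        None => println!("Empty line!"),
    }
}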