Python, data work, and O’Reilly books

I own many O’Reilly books about code. I’m kind of mad that they quit selling PDFs, because I loved them for their searchability, and the Kindle editions are nowhere near as good (they have layout issues that don’t occur in PDFs).

Recently, though, I bought a hardcopy of Python Data Science Handbook, and this inspired me to examine my O’Reilly Python library.

First, a bit about Python Data Science Handbook: It’s a large book, 530 pages, but it has only five chapters:

  1. “IPython: Beyond Normal Python” (all the stuff you can do with the IPython shell, which is different from Jupyter Notebooks)
  2. Intro to NumPy
  3. Pandas
  4. Matplotlib
  5. Machine learning

That list is exactly why I bought this book, even though I already owned others. (See the whole book online.) I especially want to learn more about using Matplotlib in a Jupyter Notebook.

After reading chapters 1 and 2, I went into my older O’Reilly PDFs to see what other Python books I have in that collection. I opened Data Wrangling with Python and ended up spending more time in it than I’d expected, because — surprise! — not only is it completely different from Python Data Science Handbook; it is all about the kinds of things journalists use Python for the most: web scraping, document management, data cleaning. I don’t know why I’ve never spent more time with that book! (See the table of contents.) The first two chapters explain the Python language well for beginners, and then it goes on to data types (CSV, JSON, XML) that you need to know about when dealing with data provided by government agencies and the like. There’s a whole chapter on working with PDFs.
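Those formats are easy to get a feel for with nothing but the standard library. Here’s a quick sketch; the file contents and field names below are made up for illustration, not taken from the book:

```python
import csv
import json
from io import StringIO

# Made-up stand-ins for the kinds of files agencies hand out
csv_text = "agency,budget\nParks,1200000\nTransit,890000\n"
rows = list(csv.DictReader(StringIO(csv_text)))  # list of dicts, one per row

json_text = '{"agency": "Parks", "budget": 1200000}'
record = json.loads(json_text)                   # a plain Python dict

print(rows[0]["agency"], record["budget"])
```

Note that csv gives you strings for everything ("1200000"), while json preserves types (1200000 as an int) — one of those small gotchas the book’s data-cleaning chapters deal with.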

The only downside to Data Wrangling with Python is that the examples and code are Python 2.7. I understand why the authors made that choice in 2015, but now it’s a detriment, as those old 2.7 libraries are no longer being maintained. You can still learn a ton from this book, and if you’re a bit experienced with Python and the differences between 2.x and 3.x, it should be easy to work around any issues caused by the 2.7 code.
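For instance, the fixes you’ll make most often while translating the book’s 2.7 code are small and mechanical. These examples are mine, not the book’s:

```python
# Three differences cover most of the 2.7-to-3.x fixes you'll make:

# 1. print is a function in Python 3 (Python 2: print "hello")
print("hello")

# 2. urllib2 was split up in Python 3 (Python 2: import urllib2)
from urllib.request import urlopen

# 3. / is true division in Python 3 (Python 2: 7 / 2 == 3)
result = 7 / 2  # 3.5 in Python 3
```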

One other criticism I’d offer about Wrangling is that the chapter “Data Exploration and Analysis” uses agate, a Python library designed for journalists, but I think Pandas (another Python library) would be a better choice.
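To show what I mean, here’s the kind of quick exploration Pandas makes painless. The dataset is invented for illustration:

```python
import pandas as pd

# A tiny made-up dataset standing in for the kind of data
# the book explores with agate
df = pd.DataFrame({
    "agency": ["Parks", "Parks", "Transit"],
    "salary": [52000, 61000, 58000],
})

# Group-and-aggregate in one line
mean_salary = df.groupby("agency")["salary"].mean()
print(mean_salary)
```

Pandas also shows up again and again in other books and tutorials (including Python Data Science Handbook), so time spent learning it pays off more broadly than time spent on agate.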

I’ve been teaching web scraping with Python to journalism students for four years now, and I’ve used a different O’Reilly book, Web Scraping with Python, by Ryan Mitchell, since the beginning. An updated second edition of Mitchell’s book came out last year, moving from 2.x to 3.x, which is good. (See the table of contents.) However, after yesterday’s time spent with Data Wrangling with Python, I wish I were using that book instead. Its 2.x code rules out switching, though, because my students are beginners and we use Python 3.x. I like a lot of things about Mitchell’s book, but it’s a bit of a tough slog for Python beginners.

I have several other Python books (including some not from O’Reilly), but as I’m focused here on dealing with data issues (analysis and charts as well as scraping and documents), there’s only one other book I’d like to include in this post. It’s actually not a Python book, but it is from O’Reilly: Doing Data Science, by Schutt and O’Neil. (See the table of contents.) It’s older (published in 2013), but I think it holds up as an introduction to data analysis, algorithms, etc. It even has a chapter titled “Social Networks and Data Journalism.” Charts are in color, which I like very much. There’s not a lot of code in the book — it’s not about showing us how to write the code — and examples are in several languages, including Python, R, and Go.

All four books referenced here are distinctly different from one another. Although there is some overlap, it’s minimal.

Scraping details

I’ve been scraping websites with BeautifulSoup for several years, but not always using the Requests library.

Old way:

from urllib.request import urlopen
from bs4 import BeautifulSoup
url = ""
html = urlopen(url)
soup = BeautifulSoup(html, "html.parser")

New way:

import requests
from bs4 import BeautifulSoup
url = ""
html = requests.get(url)
soup = BeautifulSoup(html.text, "html.parser")

So the two ways are really similar, but it turns out that the Requests library gives us a choice: instead of html.text, we could use html.content. So what’s the difference, and does it matter?

As usual, it’s Stack Overflow to the rescue. html.text is the normal, usual choice: it gives us the content of the HTTP response decoded to Unicode text, which suits probably 99.9 percent of requests. html.content gives us the content of the HTTP response as raw bytes. We would choose that for a non-HTML file, such as a PDF or an image.
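Here’s a sketch of the distinction without making a network call: .text is essentially .content decoded with the encoding Requests detects. The bytes below are made up for illustration:

```python
# html.content holds raw bytes, exactly as they came over the wire
raw = b"<p>caf\xc3\xa9</p>"  # UTF-8 bytes, as html.content would hold them

# html.text is those bytes decoded to a Python string, using the
# encoding Requests detects (here, UTF-8)
text = raw.decode("utf-8")

print(type(raw), type(text))  # bytes vs. str
# For a PDF or an image, you'd skip decoding and write the raw bytes:
# open("file.pdf", "wb").write(response.content)
```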