- Geospatial Data Science Quick Start Guide
- Abdishakur Hassan, Jayakrishnan Vijayaraghavan
Handling missing values
A machine learning algorithm such as random forest can handle a few missing values well, and in some cases we can adopt strategies such as imputing missing values or removing the affected rows. But if the proportion of missing values in a column is high, we might need to remove the entire column. The following lines of code determine the percentage of missing values in each column of the data:
# Fraction of missing values in each column
na_counts = pd.DataFrame(df.isna().sum() / len(df))
na_counts.columns = ["null_row_pct"]
# Show only the columns with missing values, largest fraction first
na_counts[na_counts.null_row_pct > 0].sort_values(by="null_row_pct", ascending=False)
The resulting DataFrame looks as follows:

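As a minimal, self-contained sketch of the computation above, we can exercise it on a toy DataFrame (the column names follow the text, but the values here are made up for illustration):

```python
import pandas as pd
import numpy as np

# Toy DataFrame standing in for the trip data (values are made up).
df_toy = pd.DataFrame({
    "Pickup_latitude": [40.7, np.nan, 40.8, np.nan],
    "PULocationID": [np.nan, 132.0, np.nan, 68.0],
    "Fare_amount": [12.5, 7.0, 9.5, 15.0],
})

# Fraction of missing values per column, as in the snippet above.
na_counts = pd.DataFrame(df_toy.isna().sum() / len(df_toy))
na_counts.columns = ["null_row_pct"]
result = na_counts[na_counts.null_row_pct > 0].sort_values(
    by="null_row_pct", ascending=False
)
print(result)
```

Columns with no missing values (such as `Fare_amount` here) are filtered out by the `> 0` condition, so the result lists only the columns that need attention.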
At first glance, we might be inclined to remove all rows that have missing latitude or longitude values for pickup and dropoff, since we identified these coordinates as the major features we will build our model upon. On closer inspection, however, the percentages of missing values for the PULocationID/DOLocationID columns and for the Pickup_longitude/Pickup_latitude and Dropoff_longitude/Dropoff_latitude columns are exact complements of each other: the missing-value percentages of any pair, taking one column from each group, sum to exactly 100%. As a corollary, for each missing value in the pickup or dropoff coordinates, the corresponding row has a non-missing value in PULocationID or DOLocationID.
But what are these location IDs? They are the taxi zone IDs assigned to different areas of New York. Though these zones are areal features, we can compute the centroid of each zone and substitute it for the missing pickup or dropoff coordinates. When both the location ID and the coordinates are missing, however, we must remove those rows. The following lines of code accomplish this:
# Drop rows where both the coordinates and the zone ID are missing,
# for either the pickup or the dropoff side
df = df[~(
    (df.Pickup_latitude.isna() & df.PULocationID.isna()) |
    (df.Dropoff_latitude.isna() & df.DOLocationID.isna())
)]
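To complete the substitution described above, the remaining missing coordinates can be filled from a lookup table of taxi-zone centroids. The `zone_centroids` table and its column names below are assumptions for illustration; in practice the centroids would be computed from the NYC taxi-zone polygons (for example, with GeoPandas' `.centroid`):

```python
import pandas as pd
import numpy as np

# Hypothetical centroid lookup table (values made up for illustration).
zone_centroids = pd.DataFrame({
    "LocationID": [68.0, 132.0],
    "centroid_lat": [40.749, 40.646],
    "centroid_lon": [-74.002, -73.786],
})

# Toy trip data: row 0 has coordinates, row 1 only has a zone ID.
df = pd.DataFrame({
    "Dropoff_latitude": [40.71, np.nan],
    "Dropoff_longitude": [-74.00, np.nan],
    "DOLocationID": [np.nan, 132.0],
})

# Join the centroids on the dropoff zone ID, then fill only the
# missing coordinates; existing coordinates are left untouched.
df = df.merge(zone_centroids, left_on="DOLocationID",
              right_on="LocationID", how="left")
df["Dropoff_latitude"] = df["Dropoff_latitude"].fillna(df["centroid_lat"])
df["Dropoff_longitude"] = df["Dropoff_longitude"].fillna(df["centroid_lon"])
df = df.drop(columns=["LocationID", "centroid_lat", "centroid_lon"])
```

The same merge-and-fill step would be repeated for the pickup side using PULocationID and the pickup coordinate columns.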