For a reminder, the readings for this week are:

In this week's lecture notes we will go over the use of Census data and its limitations in calculating crime rates. We then discuss proportional areal allocation – a procedure that takes data from one set of spatial units and estimates them for another set of spatial units. I will also cover some more about choropleth maps this week, in particular how to choose the bins for choropleth maps, and some alternatives to choropleth maps.

The Use of Rates

A fundamental problem in predicting crime is that more crime happens where more people are located. Think routine activities theory – it is necessary for an offender and a victim to meet together in space and time for an offense to occur. Places with very few people will have very few meetings.

Data from the Census gives us counts of the residential population (plus various other demographic and socio-economic statistics). For some large areas this can be a reasonable metric for estimating the prevalence of meetings between offenders and victims, but it can be misleading. For example, if you count up the places with the most reported crime in a city, frequently a shopping mall will make the list. A shopping mall (or a business district) may have zero or very little residential population though.

We typically interpret higher crime rates (with the residential population as the denominator) as indicating more dangerous places. But shopping malls are not per se dangerous places – they just happen to have many people interacting.

This is a long standing problem, and researchers have considered alternative baselines instead of the residential population. Boggs (1965) considers different baseline denominators, such as all potential pairwise interactions between individuals for homicide (e.g. with 10 people, you have a potential 10*9/2 = 45 unique interactions), and the amount of parking space for theft of vehicles. Others include estimates adjusted for the working population (Stults and Hasbrouck 2015), or similarly estimates of the 24 hour ambient population using city lights (Andresen 2011).
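To make the denominator choice concrete, here is a small sketch. The Boggs-style pairwise baseline is the formula from the text; the mall counts and populations are made-up numbers for illustration.

```python
# Boggs (1965) style baseline: unique pairwise interactions among n people
def pairwise_interactions(n):
    return n * (n - 1) // 2

# The same crime count gives very different "rates" depending on the
# denominator. All numbers below are hypothetical.
def rate_per_1000(crimes, denominator):
    return 1000 * crimes / denominator

mall_crimes = 120
residential_pop = 300    # few people live at a shopping mall
ambient_pop = 15000      # but many people pass through daily

print(pairwise_interactions(10))                    # 45
print(rate_per_1000(mall_crimes, residential_pop))  # 400.0 per 1,000 residents
print(rate_per_1000(mall_crimes, ambient_pop))      # 8.0 per 1,000 ambient population
```

The mall looks extremely dangerous per resident, but quite ordinary per person actually present – exactly the distortion the text describes.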

But in the end these are all estimates. Almost all crime risk is very dynamic – for a given area the baseline risk changes throughout the day (Felson and Poulsen 2003). It is near impossible to get a reasonable estimate of such varying risk, especially for the small areas that most police departments are interested in predicting crime for. That said, there is still a need to normalize crime counts between different areas to calculate risk in many different circumstances. I would rather people use some well justified denominator and understand its limitations than use no denominator at all just because none are perfect.

Census Data Overview

Because the use of Census data is so prevalent in academic research in criminology and criminal justice, as well as its use as a baseline, I think it is important you understand what the Census measures and how it disseminates data. If you ever read research on neighborhood effects (e.g. neighborhoods with more poverty have more crime) in the US, it is almost invariably using data from the Census – including demographic estimates and spatial boundaries for neighborhoods. We will be working with the census areas of block groups and tracts frequently in this class, so you need to know the difference between them.

The Census was originally mandated by Congress to tabulate the US population once every 10 years, starting in 1790. It subsequently expanded over the years to collect a variety of demographic data from individuals. The Census still tries to get an exact count of the population every ten years, but most of the demographic data it estimates using samples.

The product we will be using information from is the American Community Survey. It used to be that every decennial census had a short form that all adults needed to fill out (with basic information such as sex, race, and age). Then there was a long form with more extensive questions – it is from this long form that we estimated things like poverty and income for larger areas.

Recently though the Census decided to drop the long form, and instead field the more extensive survey continuously rather than waiting every 10 years. This is the American Community Survey.

So instead of using potentially 10 year old data, we now have access to the American Community Survey, updated on a more regular basis. The American Community Survey (ACS from here on out) releases three products: a 1 year estimate, a 3 year estimate, and a 5 year estimate. For the small geographies we are working with, they only release a 5 year estimate. This is because it takes many samples to get an accurate estimate for the small areas, so they need to conduct surveys for a long time. The 5 year estimates are based on surveys in the current and prior four years. So the 2012 5 year estimates include surveys from 2008 through 2012.

The Census Bureau mostly does not disseminate individual data[1] – most often we are working with aggregate counts or statistics per area. The ACS disseminates data at the typical state and county levels, but also for geographic areas defined solely by the Census: census tracts and census block groups. The diagram below shows the hierarchy of those census data units for the ACS.

Note there is one smaller geographic unit, the census block, which the decennial census does provide population counts for. But the ACS does not provide estimates at such small units.

Census tracts and block groups are often used to approximate neighborhood boundaries. The Census intentionally tries to create tracts of up to around 8,000 people that are demographically similar and follow particular natural landmarks. (So they try to make tracts not cross rivers, train tracks, or large highways.) When a tract contains too many people, the Census Bureau will split it up, so tract boundaries can change between each decennial census.

Block groups do not have as fixed a definition of how they are created – they are just always as small as or smaller than a tract. (Often in very dense cities a tract may only have one block group within it.) In my experience they frequently contain around 500 to 3,000 people.

Neighborhood boundaries are mostly arbitrary. I think the Census does a very good job of selecting areas that are demographically homogeneous. However, they rarely follow local conventions for where neighborhood boundaries are. Tracts and block groups are frequently smaller than residential neighborhoods, but may be too large for some commercial neighborhoods. For example, sometimes a single street is its own neighborhood, like a busy strip mall. Census areas are never just one street though. And because the Census frequently uses streets as its boundaries, people living across the street from one another can be in different tracts or block groups. It is a hard problem though, and I don't think I could do a better job than the Census does.

Proportional Areal Allocation

Sometimes we have estimates from census areas, but we really want estimates for different geographic units of analysis. A frequent example for crime analysts is estimating population characteristics for idiosyncratic police areas, like patrol zones. One way to create such estimates is proportional areal allocation, a type of dasymetric mapping procedure (Poulsen and Kennedy 2004).

The idea behind dasymetric mapping is that we have some aggregate number, like residential population per square mile (i.e. population density), but we can improve upon this estimate by noting that some areas cannot have any residential population, like a lake.

So for example, say we have a county that is 10 miles by 10 miles – 100 square miles – with a population of 1 million people. The density would then be 10,000 people per square mile. Let's say though that a river takes up 20 square miles, so there are actually only 80 square miles of land in that county. We can assume that no people live in the river, and so the density of the remaining land is 12,500 people per square mile.
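The arithmetic in this example can be written out directly, using the numbers from the text:

```python
# Worked example from the text: a 100 square-mile county with 1 million
# people, 20 square miles of which is a river (no residents).
total_pop = 1_000_000
total_area = 100   # square miles
water_area = 20    # square miles of river

naive_density = total_pop / total_area               # treats the river as land
land_density = total_pop / (total_area - water_area) # dasymetric refinement

print(naive_density)  # 10000.0 people per square mile
print(land_density)   # 12500.0 people per square mile
```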

When we then make our map, such as giving an area a color in a choropleth map, the map basically assumes that the population density is equal throughout that remaining area. This is the equal allocation assumption. This assumption is clearly nonsense for large areas, but absent other information it may be the best guess we have, and for smaller areas it may not be so egregiously wrong as to be misleading.

Using this same assumption we can interpolate data from one set of spatial units – such as census tracts – to a different set of spatial units, such as police zones. Here is a walkthrough of how one would do that.

In the image below, pretend the rectangular areas X, Y, and Z are areas for which we have counts of the population. Then pretend that the circle is the area we want a population estimate for after our procedure. (A realistic example: a fire department may want to know the total population within some distance of the fire station.)

First what we do is take the intersection of the two polygon layers. The intersection of two polygon layers is the set of unique areas created by their overlapping boundaries. (This is easier to show than it is to explain in words.)

Now let's say we know the total population in area Z is 5,000, and its area is 100,000 square meters. Let's say the smaller piece of Z inside the circle is 12,000 square meters – so 12000/100000 = 0.12 (twelve percent). Given our equal allocation assumption, we then multiply the total population of 5,000 by 12%, which gives us 600 people.

Follow the same procedure for the Y and the X components, and then aggregate the statistics back up to the circle of interest.
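The allocate-then-aggregate steps above can be sketched in a few lines. The Z numbers come straight from the walkthrough; the populations and intersection areas for X and Y are made up here purely for illustration (in practice you would compute the intersection areas in a GIS, e.g. with an intersect/overlay tool).

```python
# Source polygons: name -> (total population, total area in sq meters).
# Z matches the walkthrough; X and Y are hypothetical.
source_areas = {
    "X": (3000, 80_000),
    "Y": (4000, 90_000),
    "Z": (5000, 100_000),
}
# Area of each source polygon falling inside the circle (sq meters).
# Only Z's 12,000 comes from the text; the rest are hypothetical.
overlap = {"X": 8_000, "Y": 27_000, "Z": 12_000}

circle_pop = 0
for name, (pop, area) in source_areas.items():
    share = overlap[name] / area   # equal allocation assumption
    circle_pop += pop * share      # Z contributes 5000 * 0.12 = 600

print(round(circle_pop))  # 2100
```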

This is the approach we will be using in our tutorial for this week.

Another common approach is to interpolate the data to some very small unit – such as individual houses or tax parcels – and then you have more flexibility to aggregate up to varying larger areas. A good example of this is Maantay, Maroko, and Herrmann (2007), whose motivation was estimating the number of individuals at risk from flooding in New York City.

In this example, each of our larger areas has 12 smaller units. So each unit would receive 1/12th of the total for its larger unit. Using the same 5,000 estimate for our Z area, we would then have 5000 * 1/12 ≈ 417 people for each of the subsets.
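A quick sketch of that small-unit split and re-aggregation, using the Z numbers from the text; how many of the small units actually touch the circle is a made-up assumption here.

```python
# Split Z's population equally over its 12 small units, then aggregate
# back up over the units that touch the circle of interest.
z_pop = 5000
n_units = 12
per_unit = z_pop / n_units   # about 417 people per small unit

touching = 3   # hypothetical: 3 of Z's 12 small units touch the circle
estimate = per_unit * touching

print(round(per_unit))   # 417
print(round(estimate))   # 1250
```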

Then we can either map our individual subsets

Or we can aggregate our estimates back up to our circle of interest.

With the smaller units you may just make the simplifying assumption that if a unit touches the circle it should be counted.

Making Bins in Choropleth Maps

There are still a few more things I want to say about choropleth maps. The first is how we choose the bins in a choropleth map. The bins are arbitrary, but can have a big impact on how the map looks. Here is an example from Slocum (2005).

Note, popular notation for writing out bins is [low,high), where "[" in mathematical notation means closed (the endpoint is included) and ")" means open (it is not). So if we have [0,100) colored as black in our choropleth map, that would mean black areas could be equal to zero, or could be greater than zero but less than 100 (so 100 would not be in that bin). You can of course switch which end of the bin is open or closed.
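The half-open convention is easy to get wrong in code. A small sketch of assigning values to [low, high) bins (the bin edges here are arbitrary examples):

```python
import bisect

# Bin k covers [edges[k], edges[k+1]). With these edges, a value of
# exactly 100 belongs to the second bin, not the first.
edges = [0, 100, 200]

def bin_index(value, edges):
    # bisect_right gives the half-open [low, high) behavior:
    # a value equal to an edge falls into the bin starting at that edge
    return bisect.bisect_right(edges, value) - 1

print(bin_index(0, edges))     # 0
print(bin_index(99.9, edges))  # 0
print(bin_index(100, edges))   # 1
```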

Here are some popular ways to choose the bins.

Natural breaks uses a clustering procedure to assign the bins. This can be useful if you have data with large breaks. For example, say you had data that looked like 1,2,3,10,11,12,19,20. If you used natural breaks on this data and chose 3 bins, you would have bins of [1-3],[10-12],[19-20].

ArcGIS defaults to using Jenks' natural breaks. Note that Jenks' original procedure was equivalent to the clustering procedure called k-medoids in one dimension; the ArcGIS default though is equivalent to the k-means procedure. Natural breaks does not tell you how many bins to split the data into – you must choose that as well. Also it is very rare for data to have natural breaks like in my example, so it can potentially be a very bad default.
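For intuition, here is a much-simplified stand-in for natural breaks: just split the sorted data at the largest gaps. This is not Jenks' optimization (which minimizes within-bin variance), but on data with obvious gaps, like the example above, it gives the same answer.

```python
# Simplified "natural breaks": cut the sorted data at the n_bins - 1
# largest gaps between consecutive values. Not the real Jenks/k-means
# procedure, just an illustration of the idea.
def gap_breaks(data, n_bins):
    data = sorted(data)
    gaps = [(data[i + 1] - data[i], i) for i in range(len(data) - 1)]
    # indices after which to cut, at the n_bins - 1 biggest gaps
    cuts = sorted(i for _, i in sorted(gaps, reverse=True)[: n_bins - 1])
    bins, start = [], 0
    for i in cuts:
        bins.append(data[start : i + 1])
        start = i + 1
    bins.append(data[start:])
    return bins

print(gap_breaks([1, 2, 3, 10, 11, 12, 19, 20], 3))
# [[1, 2, 3], [10, 11, 12], [19, 20]]
```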

Unfortunately there is no easy solution as to what bins you should choose. When making comparisons between several different maps I often use quantiles (Brewer and Pickle 2002). Quantiles have the nice property that they are balanced – with other schemes you may have a bin that is empty or only has a few areas, whereas quantile maps force the bins to contain (roughly) equal numbers of areas.
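Quantile breaks are straightforward to compute with the standard library; the data values here are arbitrary placeholders for per-area rates.

```python
import statistics

# Hypothetical per-area crime rates
values = [4, 8, 15, 16, 23, 42, 50, 61, 70, 99, 120, 300]

# Three interior cut points split the areas into four (quartile) bins,
# each holding roughly a quarter of the areas.
quartile_edges = statistics.quantiles(values, n=4)
print(quartile_edges)
```

Because every bin gets about the same number of areas, a set of quantile maps stays comparable even when the underlying distributions differ a lot.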

My most common advice is simply to not be hyperbolic in the map. If you have some external standard by which to identify outlying high values, reserve the most intense color on the map for those. It often takes local knowledge of what is being mapped, though, to choose a reasonable set of choropleth bins.

Alternatives to Choropleth Maps

I wanted to also go over some alternatives to choropleth maps. Here is an example from last week's homework.

A simple alternative is to use varying sized circles instead of color to encode values. Here is the same map using that technique.

This is nice when we have very small areas that can barely be seen on the map – they can still get a big circle. A problem though is that the circles often overlap. Here is an example from Schmid (1960) – back when they did not have computers to help make maps!

Another alternative to a choropleth map is the dot density map. Instead of colors, one places dots in semi-random positions within each area. When the dots get very dense, you can see patterns, and the lines in choropleth maps are not so jarring. Here is our example burglary map (here just the counts of burglaries – the rate does not make much sense for this type of map).

This tends to be problematic for very small areas, as the random placement can end up in an inappropriate location and be misleading. It may make you think an event happened at that actual location, as opposed to being placed randomly for visualization purposes. But zoomed out and for large areas these maps can look quite nice.
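The placement logic behind a dot density map is simple to sketch. Here the area's "polygon" is just a bounding box for illustration – real implementations reject points that fall outside the actual boundary – and the events-per-dot ratio is an arbitrary choice.

```python
import random

# One dot per `events_per_dot` events, placed uniformly at random inside
# the area's bounding box (a real map would use the actual polygon).
def dot_positions(count, xmin, ymin, xmax, ymax, events_per_dot=10, seed=0):
    rng = random.Random(seed)
    n_dots = round(count / events_per_dot)
    return [(rng.uniform(xmin, xmax), rng.uniform(ymin, ymax))
            for _ in range(n_dots)]

dots = dot_positions(137, 0, 0, 100, 100)
print(len(dots))  # 14 dots for 137 burglaries at 10 events per dot
```

The fixed seed makes the "semi-random" placement reproducible, so the same map is drawn each time.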

Shaw and McKay (1970) used this technique to map the residential addresses of delinquent juveniles, and also tuberculosis cases. (See below.)

This technique has been used very effectively to visualize demographic segregation by giving different races different colors for the dots. See some examples by Eric Fisher.

Often when we place something as a point on a map, it actually refers to an area. This has caused some problems with online crime mapping, in which people are confused when a crime is assigned to the middle of the street.

The last alternative is a cartogram, which I introduced last week. There are many different types of cartograms – here is one, called a Dorling cartogram, from the originator of the idea, Danny Dorling (Dorling 2012).

It uses circles and colors to visualize the data, but places the circles so they do not overlap, unlike in my earlier proportional circle example. This loses the ability to easily tell where the original areas are, but tries to preserve which areas are near each other.

A benefit of using such a map is that you can use glyphs to encode more information than just one number. For example, you could have a T-shaped glyph, where the horizontal cross bar is longer for areas with more poverty, and the vertical bar is longer for areas with a higher burglary rate.

Such glyph maps tend to be hard to read though, especially when the little graphs are not aligned using a common axis. Here is an example taken from H. Wickham et al. (2012). It is a simple line graph depicting hypothetical data over time across the US.

But here is a similar map with locations not aligned so nicely. It becomes much harder to see regular patterns.

Most of our areas are not in convenient rows, so these can be harder to understand than just plotting the colors in choropleth maps (or making several small multiple choropleth maps).

Homework and for Next Week

You will have two homework assignments for this week. One is learning how to download Census data and export it to a spreadsheet. The second is to do proportional areal allocation. As a warning, the areal allocation homework is one of the harder assignments in the course. You need to really pay attention and follow the instructions exactly for it to work out. Remember you can use the question forum if you are stuck.

The readings for next week are:

For the CrimeStat reference manual do not worry too much about the math. I am more interested in you understanding the different examples and case studies of where you use particular point pattern techniques.

References and Endnotes

Andresen, Martin A. 2011. “The Ambient Population and Crime Analysis.” The Professional Geographer 63 (2). Taylor & Francis: 193–212.

Boggs, Sarah L. 1965. “Urban Crime Patterns.” American Sociological Review 30 (6). JSTOR: 899–908.

Brewer, Cynthia A, and Linda Pickle. 2002. “Evaluation of Methods for Classifying Epidemiological Data on Choropleth Maps in Series.” Annals of the Association of American Geographers 92 (4). Wiley Online Library: 662–81.

Dorling, Danny. 2012. The Visualisation of Spatial Social Structure. John Wiley & Sons.

Felson, Marcus, and Erika Poulsen. 2003. “Simple Indicators of Crime by Time of Day.” International Journal of Forecasting 19 (4). Elsevier: 595–601.

Maantay, Juliana Astrud, Andrew R Maroko, and Christopher Herrmann. 2007. “Mapping Population Distribution in the Urban Environment: The Cadastral-Based Expert Dasymetric System (Ceds).” Cartography and Geographic Information Science 34 (2). Taylor & Francis: 77–102.

Poulsen, Erika, and Leslie W Kennedy. 2004. “Using Dasymetric Mapping for Spatially Aggregated Crime Data.” Journal of Quantitative Criminology 20 (3). Springer: 243–62.

Schmid, Calvin F. 1960. “Urban Crime Areas: Part II.” American Sociological Review 25 (5). JSTOR: 655–78.

Shaw, Clifford R, and Henry D McKay. 1970. Juvenile Delinquency in Urban Areas (Revised Edition). Chicago: University of Chicago Press.

Slocum, T.A. 2005. Thematic Cartography and Geographic Visualization. Geographic Information Science. Pearson/Prentice Hall. https://books.google.com/books?id=2uQYAQAAMAAJ.

Stults, Brian J, and Matthew Hasbrouck. 2015. “The Effect of Commuting on City-Level Crime Rates.” Journal of Quantitative Criminology 31 (2). Springer: 331–50.

Wickham, Hadley, Heike Hofmann, Charlotte Wickham, and Dianne Cook. 2012. “Glyph-Maps for Visually Exploring Temporal Patterns in Climate Data and Models.” Environmetrics 23 (5). Wiley Online Library: 382–93.

  1. See IPUMS or the Current Population Survey for some exceptions.