11/30/2015

Housing Prices Attract People to Smaller Cities

Anytime I see a news story headline with something like “Top 10 Worst…” or “Top 10 Best…” I have to read it.  I am especially interested in rankings of cities.  In a recent article published in USA TODAY, the city I live in was ranked number 1 in the country as the best city to live in.  This was very cool to see, but when I saw a related article about the 20 worst cities I became very curious.

In a sample of cities with populations between 25,000 and 100,000, four dimensions contributed to an overall score.  Those key dimensions are affordability, economic health, education & health, and quality of life.  One thing that really stuck out to me, other than that the worst 20 cities were all from the same state, was that three of the four experts brought in for commentary mentioned that the number one attraction to a city in this sample was housing affordability.  Further, one expert even mentioned that when you try to compete for a specific class new to that community, like the creative class, you don’t make an economic profit.  There are accounting profits associated with expanded offerings, but when cities are smaller there is no net positive economic profit.
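To see the distinction between accounting and economic profit, here is a toy calculation (all numbers hypothetical, just to illustrate the idea): an expanded offering can post an accounting profit and still lose money economically once implicit opportunity costs are counted.

```python
# Toy illustration of accounting vs. economic profit (hypothetical numbers).
revenue        = 500_000   # from the expanded offering
explicit_costs = 450_000   # wages, rent, materials
implicit_costs = 80_000    # opportunity cost of the capital and effort used

accounting_profit = revenue - explicit_costs            # 50,000: looks positive
economic_profit   = accounting_profit - implicit_costs  # -30,000: actually negative
print(accounting_profit, economic_profit)
```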


With explicit cost measurements, like housing, it is easy to benchmark cities against each other.  Implicit costs are harder to measure but tell a more powerful story.  In a smaller community, more value is placed on commute times, but it is difficult to assess what that value really is.  Further, is there more value placed on a short commute in a city that gets a lot of snow?  Probably.  Because there are so many variables involved, implicit costs are difficult to measure at the level of the macroeconomy.  One thing is for sure: studies driven by explicit costs are hard to argue with.

Side note:  If you look at the full listing, Cedar City is listed at number 650.

5 comments:

Dr. Tufte said...

LightningMcQueen: 100/100

Good for you! For most of the 80's and 90's, the town where I grew up was rated as the safest place in America with over 100K people. It made me feel good, even though I'd moved away.

But ... oooh. I hate to rain on your parade, but statistically, almost all of those rankings of cities are nonsense. So LightningMcQueen's sense here that there's something not quite right about these is correct.

Personally, I too find them fascinating, and I still look.

In short, the reason is that they use 2 or more pieces of data to construct their ranking, and so some weight has to be chosen for each of them. You might think it would be no problem to figure out the "best" weights. Not so.

Professionally, I have to tell you what I learned. Research-oriented universities have weekly seminars where they bring in big-name people from other schools to speak. My first job was at the University of Alabama, and one of the first seminar guests wrote the seminal piece on this sort of "pop rating". At the time, Places Rated Almanac was a big deal, as was the U.S. News and World Report ranking of colleges.

It turns out there's a statistical condition for determining "best" uniquely that requires the number of categories used in your ranking to exceed the number of items ranked. Basically, you can always rank, say, 20 colleges by weighting them with, say, 10 pieces of data. But you can't determine a ranking that is uniquely best if you don't have at least 21 pieces of data on each one. In practice, what this means is that any old person can choose to weight those pieces of data any way they like, to produce any ranking they like.

This is great if you're a producer of lists of rankings: you can say just about anything you like, and at worst, disclaim in fine print that the list was produced from your personal weights and really isn't best in any serious sense. (I may have just ruined the day for some of you.) This is why there are always a ton of those rankings, and they're all different: no one is really that interested in producing the best ranking, because if they did, there'd never be any interest in doing it again (or a buyer to pay for it again).

Now, do note that all of this doesn't mean that you can't rank things, or that you shouldn't. Instead, what it means is that you will have to choose a set of weights that's right for you. If you do, you'll produce a correct ranking for you. But you shouldn't make the case that your ranking is the best one, or even appropriate for other people, because someone can always come up with their own weights (unless there's that huge amount of data out there that can be used to pick the one best set of weights).
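Here is a minimal sketch of what I mean, with made-up scores for two hypothetical cities: the very same data produce opposite rankings under different weights.

```python
# Made-up scores for two hypothetical cities on two metrics.
cities = {
    "Alphaville": {"affordability": 9, "economic_health": 4},
    "Betatown":   {"affordability": 4, "economic_health": 9},
}

def rank(weights):
    # Weighted sum of each city's metrics, highest score first.
    scores = {name: sum(weights[m] * value for m, value in metrics.items())
              for name, metrics in cities.items()}
    return sorted(scores, key=scores.get, reverse=True)

print(rank({"affordability": 0.7, "economic_health": 0.3}))  # ['Alphaville', 'Betatown']
print(rank({"affordability": 0.3, "economic_health": 0.7}))  # ['Betatown', 'Alphaville']
```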

SpencerM said...

I always take a second to read these rankings when they appear. I have always wondered why, over the years, there seems to be no consistency among the top-ranked cities. After taking a few economics classes I have noticed, as Dr. Tufte said, that which variables are chosen, and how they are weighted, is crucial to the outcome. These variables can be chosen and weighted in all sorts of different ways, creating different outcomes.

In another one of my economics classes we do a lot of regression analysis to compare different variables. It is interesting to run these regressions and look at what is called Adjusted R-Squared. The Adjusted R-Squared shows the percentage of variation actually explained by the variables, adjusted for how many variables you used. Sometimes you can create a model where it looks like your hypothesis is right on; however, it is important to look at the Adjusted R-Squared to see whether you have actually come close to explaining the outcome or not.
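For anyone curious, here is a quick sketch of reading Adjusted R-Squared off an OLS regression (all data simulated; statsmodels is just one common tool for this):

```python
import numpy as np
import statsmodels.api as sm

# Simulate a noisy linear relationship (all data made up).
rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 2.0 * x + rng.normal(scale=3.0, size=100)

X = sm.add_constant(x)       # add an intercept column
model = sm.OLS(y, X).fit()
print(model.rsquared_adj)    # share of variation explained, penalized for extra regressors
```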

I agree that implicit costs are almost impossible to measure. Explicit costs on the other hand are always interesting to see, which is why I still read these rankings every now and then. It would be cool to have a website where you could always access explicit cost data for all the cities in the United States, without necessarily trying to rank them.

Dr. Tufte said...

SpencerM: 50/50

1) If you take more of my classes, I'll show you that you should look at the regression's overall F-statistic rather than adjusted R-squared. It gets you all the same information, plus some more (see the sketch below).

2) This isn't my area, so I don't know, but I'd be certain that there are a ton of sites that actually keep track of that city data.
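On point 1), here's a self-contained sketch (simulated data again, purely as an illustration) of where the overall F-statistic and its p-value live next to adjusted R-squared in a fitted regression:

```python
import numpy as np
import statsmodels.api as sm

# Simulated data, as in the sketch above.
rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = 2.0 * x + rng.normal(scale=3.0, size=100)

model = sm.OLS(y, sm.add_constant(x)).fit()
print(model.rsquared_adj)            # explanatory power, penalized for model size
print(model.fvalue, model.f_pvalue)  # joint significance of all regressors, with a p-value
```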

Unknown said...

I too love reading the type of "pop ratings" that LightningMcQueen brings to our attention! I reviewed the list of cities and found my current residence of Saint George, Utah sits at #34. Exciting, isn't it? If housing affordability is the most important factor for those considering a specific community to reside in, then my hometown of Detroit, Michigan should be a sure win! Wait... what about the crime, the under-performing education system, the lack of jobs, and the lousy weather? I guess it won't make this list!

There is little doubt that, with some creativity, any city can make a "Top 100 List". Perhaps I am a skeptical and pragmatic bore. However, these types of lists are built to increase advertising dollars. I can identify at least 10 different "best places to live" lists in 2015, that was produced by different publications and/or websites, all surrounded by advertising. Each of these organizations uses slightly different metrics to produce slightly different lists.

On the flip side, like most other, I can't help but stop and read these "pop ratings" in hopes of finding some connection between me, the subject matter, and being tied to one of "the best"! Thank you for your post, and I wish you all the success you deserve in the future!

Dave Tufte said...

Anthony Graham: 44/50 You wrote "... that was produced ..." which should have been "... that were produced ...". Then you wrote "... like most other ..." which should have been "... like most otherS ...".

This is all good. What's interesting is the assertion that these pop lists are there to sell advertising.

Why is it that we like to consume these lists, but don't invest enough interest to get them done correctly? I mean, you just know that if anyone offered specifics about why their list is better, everyone's eyes would glaze over and they'd flip to the next thing. Why is that? It's almost like these pop lists are analogous to alternative medicine, in that actual support and documentation seem to be anathema to their users.