December 2021

America’s Best and Worst Metro Areas for School Quality


Introduction

In the pre-pandemic Before Times, one of the most consistent and pernicious long-term trends was the growing domination of so-called “Superstar Cities.” As Richard Florida explained it, a winner-take-all dynamic was leading a handful of metropolitan areas to dominate the knowledge economy. San Francisco and Silicon Valley; Washington, D.C.; New York City; Boston; and a handful of other metros were raking in jobs, venture capital, highly educated workers, and the various benefits (and occasional challenges) that come with those attributes. Other metros, not to mention America’s small towns and rural communities, were increasingly being left behind.

The contest to choose a new Amazon headquarters was symbolic of this frenzy, with hundreds of cities nationwide begging the retailing giant to pick them, complete with promises of tax breaks and other incentives, just to watch Jeff Bezos choose the D.C. metro area and New York City—winners taking all, yet again. (Amazon later backed out of the New York location under pressure from AOC and other progressives, and Nashville got something of a consolation prize.)

Though there was plenty of criticism for Bezos et al.—for giving false hope to the underdog cities and for making them cough up data that Amazon could exploit for other purposes—it was hard to fault their final decision. As the Brookings Institution’s Alan Berube wrote, it was all about “talent, talent, talent.”

There’s little doubt that the New York Cities, Bostons, D.C.s, and Silicon Valleys are very effective at attracting highly educated workers. In a virtuous cycle, these workers migrate to where the interesting, highly paid jobs are, where other smart young people live, and where the cultural amenities make their nonworking life more fun. But workers are also attracted by the promise that their own children, when they are ready to start families, will get to enjoy high-quality public schools in their new hometowns.

But is that actually true? Do the Superstar Cities boast better public schools than those in the Rust Belt or the Sun Belt? Business leaders often say that the quality of local schools is a key consideration when they are choosing new locations for corporate headquarters or a manufacturing facility. But are they looking at the right data when judging school quality?

Label us skeptics

Longtime followers of the Thomas B. Fordham Institute won’t be surprised to learn that we are skeptical about how school success tends to be measured. It’s not the fault of the business leaders—or even the economic development folks who try to woo them. It’s simply a fact of data availability: until recently, very few data sources existed that allowed nationwide comparisons of schools below the level of the state. And those sources that do exist, such as average SAT scores or graduation rates, are terrible at helping us understand the quality of the schools. That’s because such “status measures”—snapshots of performance—are so closely related to the demographic makeup of individual schools and districts. It’s why, for so long, communities have boasted about “great public schools”—which are in reality defined as “schools populated by the children of highly educated parents.”

But that’s a lousy definition of school quality, because it doesn’t consider whether schools are actually effective at helping students learn. It allows the schools in upper-middle-class suburbs to rest on their laurels, while hiding the amazing work done at many high-poverty schools, whose students start out way behind but make remarkable progress year-to-year—even if they never quite catch up to the more advantaged kids.

So those of us at Fordham wondered: If we could measure school effectiveness the right way, by looking at student progress over time, would a different picture of school quality emerge?1 And, in particular, could we start to determine at the metro-area level which American regions really had the best schools? Might that encourage business leaders to give some metros another look when making location decisions—especially in the post-pandemic world, now that so many workers are rethinking their commitment to the Superstar Cities, with their sky-high housing prices, soul-grinding traffic, and distance from family?

To be sure, there are other motivations for examining metros, as well. First, “metropolitan America” is home to roughly 80 percent of the population; it’s simply where most Americans live now. Second, because metro areas are distinct economic units, they are ideal for studying how economic growth, labor-market trends, and similar phenomena might impact K–12 education. And third, looking at metros opens new avenues for study. For instance, examining the impact of segregation on achievement doesn’t go very far if one looks only within districts, given that much segregation happens between districts, especially in metro areas with lots of small districts.

But what about the data availability problem? Thankfully, that one has been solved, thanks to the impressive Stanford Education Data Archive (SEDA). Leveraging state summative assessments and adjusting them for states’ performance on NAEP, the analysts at SEDA have enabled previously impossible comparisons of schools, districts, and cities nationwide. Now, for the first time, we are using these data to build valid comparisons at the metro-area level for schools across the United States. Although workforce data or other social outcomes (such as unemployment, earnings, or criminal-justice data) may reflect the impacts of the local education system, we’ve included the measures that are most clearly connected to school effectiveness and that are available nationally at the school district level, including the following:

  • academic growth
  • academic growth for traditionally disadvantaged students
  • improvement in achievement in recent years
  • high school graduation rates

Introducing the SLAM rankings

Because we have a penchant for catchy acronyms, we’re calling these areas Student Learning Accelerating Metros, or SLAM. What’s different about the SLAM rankings is that they correlate far less with family wealth and student demographics than do pure academic achievement ratings. They are heavily weighted toward student progress because schools have more control over how much students grow academically in the K–12 school years than they do over, say, the percentage of residents with a doctoral degree (which other rankings use). Still, we report individual rankings for the other three metrics and for average academic achievement, which is not part of the rankings formula but is still of interest (see Defining quality for more).

We understand that educational effectiveness is an inherently contested concept, so we invite users to experiment with the data and build their own rankings using our interactive data tools. Users can view rankings for all the included variables and see how metros perform on specific subjects, for different grade levels, and for various populations of students. They can also see what the relevant metrics look like when they are adjusted (or not) for student demographics.

We are grateful that our friends and colleagues at the U.S. Chamber of Commerce Foundation agreed to join us in this pursuit. After all, the Chamber’s affiliates are key players in economic development decisions, active in recruiting employers to their regions. They are integrally involved in local education issues, too, as major funders and consumers of the education system. Readers may also remember that the Chamber has a long history of educational rankings, going back to its excellent Leaders and Laggards series. We appreciate the Chamber’s support and involvement.

The overall rankings

The overall rankings show that the top five Student Learning Accelerating Metros (SLAM) are Miami, Memphis, McAllen (TX), Atlanta, and Indianapolis. These metros outperform the average on multiple measures, especially academic growth, which is our most heavily weighted metric.

Adjust the sliders to the right to re-rank the metros by weighting these academic outcomes differently. You can also see results for the fifty medium-sized metros and turn off demographic controls. To get more information about a metro, click its name in the rankings or its area on the map. The largest districts in each metro can also be selected to view their performance.

[Interactive ranking tool: select a metro from the list or by clicking a shaded area on the map, or select any shaded school district. Sliders set the component weights; defaults are Academic Growth 60%, Disadvantaged Growth 20%, Metro Progress 10%, H.S. Graduation Rate 10%, and Academic Achievement 0%.]

What should be clear is that some Superstar Cities do better than others. The Miami metro area—South Florida really, including Miami-Dade, Broward, and Palm Beach Counties—comes out number one. The “capital of Latin America” has been on a tear of late, increasingly attracting venture capital and corporate headquarters, especially in the finance industry. New arrivals will find excellent schools for their kids.

    The Boston metro area does quite well, too. To be sure, the Boston region as a whole is extremely affluent and well educated, which is not surprising given the presence of so many world-class universities, hospitals, and tech companies. But schools there aren’t resting on their laurels; students are making significant progress over time, as well, as are students in the Los Angeles, San Jose, and Chicago metro areas.

    But this is not the case for some of the other Superstar Cities. The San Francisco metro area looks pretty bad, once we measure school effectiveness the correct way. The Washington, D.C., metro area is not much better. Or look at Raleigh, in the Research Triangle, which is almost at the bottom of the list.

Please don’t confuse metro areas with their central cities. For example, Washington, D.C., proper shines on our ranking, thanks to the rapidly improving D.C. Public Schools and the city’s highly effective charter school sector. The Northern Virginia suburbs of Fairfax and Loudoun Counties, on the other hand, are the ones dragging the DMV down. They are rich and highly educated, but the kids there don’t appear to be making much progress from year to year. Indeed, Amazon may have made a mistake in locating its headquarters in Northern Virginia instead of, say, Maryland’s Prince George’s County, which is less affluent and much more diverse but boasts schools that are helping kids make more progress, especially once we control for demographic factors.

Meanwhile, some large metro areas do indeed deserve a fresh look, given the quality of their public schools, including Atlanta, Indianapolis, and Fresno. Memphis, Tennessee, and McAllen, Texas, stand out in particular, with remarkably effective schools despite their Deep South and border-town poverty—as do the smaller metros of Jackson, Mississippi, and Brownsville, Texas. Many other smaller metros deserve another look as well, including Sarasota, El Paso, Boise, and Grand Rapids (see Appendix).

    Rankings for specific outcomes and groups

Below, users can select a specific metric and rank metros according to their performance on it for specific student groups (including economically disadvantaged students, Black students, and Hispanic students), subjects (math, English language arts/reading, or both), and grade levels (elementary, middle school, or all grades), where applicable. Using the buttons to the right, re-rank metros according to their performance on any of the available outcomes, subjects, student groups, and grades.


      Defining quality

      This section describes the data and methodologies employed in assigning rankings to each large and midsize metro area in the U.S.

      Data

      This project uses two principal data sources covering public school students, including those who attend charter schools: the Stanford Education Data Archive (SEDA) version 4.0 and the Adjusted Cohort Graduation Rate data from the U.S. Department of Education’s EdFacts data collection.

The SEDA data are derived from annual spring state assessments from 2009 to 2018 in grades 3–8 and re-normed based on state performance on NAEP so that they are comparable nationwide and over time.2 These data include academic achievement measures in math and English language arts (ELA) for most schools in the country by student subgroup, as well as a variety of local data on students, schools, and economies drawn from multiple official sources. This report utilizes SEDA data that are aggregated at the school-district level; we aggregate these values further to the core-based statistical-area level using crosswalk files available at the U.S. Census Bureau website.3 The school-district level is the lowest level at which data are available by subject, grade, and student group. Moreover, when we control for demographics, data aggregated at lower levels yield more accurate estimates than units aggregated at higher levels.4
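
For readers who want to reproduce this step, the sketch below shows one way the district-to-metro aggregation might be done in Python with pandas, using the student-population weighting described in endnote 3. The file and column names (seda_district.csv, leaid, outcome, n_students, cbsa) are hypothetical stand-ins, not the actual SEDA or Census headers.

```python
import pandas as pd

# Hypothetical inputs: district-level SEDA outcomes and a Census
# district-to-CBSA crosswalk. Real file and column names differ.
seda = pd.read_csv("seda_district.csv")          # leaid, outcome, n_students
crosswalk = pd.read_csv("district_to_cbsa.csv")  # leaid, cbsa

merged = seda.merge(crosswalk, on="leaid", how="inner")

# Weight each district's standardized outcome by its student population,
# then average within each core-based statistical area (endnote 3).
merged["weighted"] = merged["outcome"] * merged["n_students"]
metro = merged.groupby("cbsa").agg(total=("weighted", "sum"),
                                   students=("n_students", "sum"))
metro["outcome"] = metro["total"] / metro["students"]
print(metro["outcome"].sort_values(ascending=False).head())
```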

Although we rely primarily on data from SEDA, our results naturally differ somewhat from those found on SEDA’s official Edopportunity.org website, because our data are aggregated at the metro level, are restricted to the most recent years of data (2016–18), and include some additional demographic controls, as well as additional metrics, such as growth for disadvantaged groups and the high school graduation rate.

      Table 1 summarizes the share of the Student Learning Accelerating Metros (SLAM) rankings formula assigned to each component. In line with the notion that schools should be judged on factors that are most under their control, the rankings heavily weight overall student growth (60 percent), followed by student growth for historically disadvantaged students (20 percent); how much progress the metro has made overall in recent years (10 percent); and the graduation rate (10 percent), which is the only measure of high school performance available nationwide. All these measures are adjusted based on the demographics of the students at the district level.

Table 1. SLAM rankings are composed largely of student growth measures for grades 3–8.

Factor | Description | Adjusted for demographics? | Weight
Cohort academic growth | Improvement for cohorts of students in math and ELA in recent years, grades 3–8 | Yes | 60%
Cohort academic growth for disadvantaged groups | Improvement for cohorts of Black, Hispanic, and economically disadvantaged students in math and ELA in recent years, grades 3–8 | Yes | 20%
Metro improvement | Metro-level difference in average achievement from the earliest period of the data (2009–11) to the most recent period (2016–18), grades 3–8 | Yes | 10%
High school graduation rate | Adjusted cohort high school graduation rate | Yes | 10%
Average academic achievement* | Average achievement level in math and ELA in recent years, grades 3–8 | Not applicable | 0%

      Note: Average academic achievement is not included as a factor in the SLAM rankings but is available for users to include in the interactive rankings above.
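
To make the weighting concrete, here is a minimal sketch of how the Table 1 components might be combined into a single score, assuming each component has already been standardized (z-scored) and demographically adjusted; all variable names and values here are hypothetical, not SEDA fields.

```python
import pandas as pd

# Hypothetical metro-level table; each component already standardized.
metros = pd.DataFrame({
    "metro": ["A", "B", "C"],
    "growth_z": [1.2, 0.1, -0.8],
    "disadv_growth_z": [0.9, -0.2, -0.5],
    "progress_z": [0.3, 0.6, -1.0],
    "grad_rate_z": [0.5, -0.4, 0.2],
})

WEIGHTS = {  # the Table 1 weights
    "growth_z": 0.60,
    "disadv_growth_z": 0.20,
    "progress_z": 0.10,
    "grad_rate_z": 0.10,
}

# Weighted sum of components, then rank (1 = best).
metros["slam_score"] = sum(metros[col] * w for col, w in WEIGHTS.items())
metros["slam_rank"] = metros["slam_score"].rank(ascending=False).astype(int)
print(metros.sort_values("slam_rank"))
```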

      Although we believe that prioritizing student growth is the fairest measure of performance, that decision does not necessarily reflect the values of all stakeholders. Thus, we include average academic achievement on our interactive website so that users can choose to include it in their own ranking metric. For more about the relationship between the SLAM rankings and average academic achievement, see Don’t the best schools have the highest achievement?

      Cohort academic growth (60%)

      Measures of student progress are essential to understanding the performance of schools, as the average level of achievement among a given group of students is shaped by demographic and economic factors outside the control of schools. Because longitudinal student-level data are not available across the nation’s metro areas, we create a measure of “cohort growth” in achievement to estimate the progress that cohorts of students are making from year to year.5

      Cohort growth approximates measures of student growth over time by tracking differences in average scores as cohorts of students progress through the grades. In other words, these measures rely on repeated aggregated, cross-sectional data.6 The measures of cohort growth used in the SLAM rankings include all the grades and years available in the most recent three years of data (2016–18) and one additional prior year (2015) as a baseline. For the main “composite” measures used in this report (including both ELA and math), we include all measures of both subjects from recent years.7
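
Here is a minimal sketch of the cohort-growth idea on made-up grade-by-year averages: each cohort’s score in grade g and year t is compared with the same cohort’s score in grade g+1 and year t+1. This illustrates the logic only; SEDA’s actual learning-rate estimates are model based (see endnote 5).

```python
import pandas as pd

# Hypothetical grade-by-year mean scores for one metro (standardized units).
scores = pd.DataFrame({
    "year":  [2015, 2015, 2016, 2016, 2017, 2017],
    "grade": [3, 4, 3, 4, 4, 5],
    "mean_score": [0.10, 0.20, 0.12, 0.25, 0.31, 0.40],
})

# A cohort observed in grade g in year t reappears in grade g+1 in year t+1.
# Shifting the baseline table forward one grade and one year pairs each
# cohort's follow-up score with its baseline score.
baseline = scores.rename(columns={"mean_score": "baseline_score"}).copy()
baseline["year"] += 1
baseline["grade"] += 1

growth = scores.merge(baseline, on=["year", "grade"])
growth["cohort_growth"] = growth["mean_score"] - growth["baseline_score"]

# The report also drops cohorts whose size changes by more than 5 percent
# between baseline and follow-up years (endnote 6); omitted here for brevity.
print(growth[["year", "grade", "cohort_growth"]])
print("average cohort growth:", round(growth["cohort_growth"].mean(), 3))
```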

      Cohort academic growth for disadvantaged groups (20%)

      These measures mimic the overall cohort growth measures above but are weighted according to the proportion of traditionally disadvantaged students in each metro, including Black, Hispanic, and economically disadvantaged students.8 This means, for example, that metros with many Hispanic students and few Black students will be assigned rankings based more on the growth of Hispanic students than that of Black students.9
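
As an illustration of that weighting, the toy calculation below averages hypothetical subgroup growth estimates by each group’s enrollment share. Shares can overlap, which is how doubly disadvantaged students end up counting twice (see endnote 9).

```python
# Hypothetical cohort-growth estimates and enrollment shares for one metro.
# Shares may overlap: Black or Hispanic students who are also economically
# disadvantaged effectively count twice (endnote 9).
subgroup_growth = {"black": 0.12, "hispanic": 0.18, "econ_disadv": 0.15}
subgroup_share = {"black": 0.10, "hispanic": 0.45, "econ_disadv": 0.55}

# Weight each group's growth by its share of the metro's students, so a
# metro with many Hispanic students and few Black students is judged
# mostly on its Hispanic students' growth.
total_share = sum(subgroup_share.values())
disadv_growth = sum(
    subgroup_growth[g] * subgroup_share[g] for g in subgroup_growth
) / total_share
print(round(disadv_growth, 3))
```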

      Metro progress (10%)

      This component captures each metro’s progress (or lack thereof) in academic achievement over the past six to ten years.10 The measure of progress or improvement represents the difference in average achievement of all available grades in both ELA and math from the first three years of available data (2009–11) to the most recent three years of data (2016–18).11
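
In computational terms, this component is simply a difference of period averages; a toy example with invented numbers:

```python
import pandas as pd

# Hypothetical pooled achievement observations (all grades and subjects)
# for one metro, in standardized units.
obs = pd.DataFrame({
    "year": [2009, 2010, 2011, 2016, 2017, 2018],
    "mean_achievement": [-0.20, -0.15, -0.18, 0.02, 0.05, 0.01],
})

early = obs.loc[obs["year"].between(2009, 2011), "mean_achievement"].mean()
recent = obs.loc[obs["year"].between(2016, 2018), "mean_achievement"].mean()
print("metro progress:", round(recent - early, 3))  # positive = improving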

      High school graduation rate (10%)

      We use the adjusted cohort graduation rate (2016–18) from the U.S. Department of Education’s EdFacts data. These data are calculated at the local education agency (LEA) level, while this analysis aggregates them up to the core-based statistical area level.12 LEAs with smaller numbers of students are flagged in the EdFacts data and excluded from the aggregated metro area measures.
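
A sketch of that aggregation on hypothetical LEA records, dropping flagged small LEAs and weighting the remainder by cohort size (the real EdFacts fields are named differently):

```python
import pandas as pd

# Hypothetical LEA-level EdFacts-style records. "flagged" marks the small
# LEAs that EdFacts flags and that are excluded from metro averages.
leas = pd.DataFrame({
    "cbsa": ["X", "X", "X", "Y", "Y"],
    "grad_rate": [0.91, 0.84, 0.70, 0.88, 0.95],
    "cohort_size": [4000, 1500, 40, 2500, 60],
    "flagged": [False, False, True, False, True],
})

# Exclude flagged LEAs, then take the cohort-size-weighted average rate.
ok = leas[~leas["flagged"]].copy()
ok["weighted"] = ok["grad_rate"] * ok["cohort_size"]
metro = ok.groupby("cbsa").agg(total=("weighted", "sum"),
                               students=("cohort_size", "sum"))
metro["grad_rate"] = metro["total"] / metro["students"]
print(metro["grad_rate"])
```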

      Average academic achievement (0%)

      The SLAM rankings do not assign weight to average academic achievement, but this metric is included in our interactive website for users to explore on their own. The measure represents the average of the grades and years available in the most recent three years of data (2016–18).13 For the “composite” measures, we include available data for both ELA and math from these same years.

      Adjusting for demographics

      Unless otherwise noted, outcomes are adjusted based on the demographics of the student populations as reported in SEDA. Predicted values of each outcome are assigned based on the effects of student demographic factors estimated by regression methods, and schools are then ranked based on the extent to which they outperform or underperform their predictions.14
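
The sketch below illustrates the residual-ranking logic on simulated data: regress the outcome on demographic shares, then rank units by how far they sit above or below their predicted value. The predictor set here is a simplified stand-in for the fuller specification described in endnote 14.

```python
import numpy as np
import pandas as pd

# Simulated district-level data: one outcome plus demographic predictors.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "pct_econ_disadv": rng.uniform(0, 1, n),
    "pct_black": rng.uniform(0, 0.5, n),
    "pct_hispanic": rng.uniform(0, 0.6, n),
})
df["outcome"] = 0.5 - 0.8 * df["pct_econ_disadv"] + rng.normal(0, 0.1, n)

# OLS via least squares: predict the outcome from demographics, then rank
# districts on the residual (actual minus predicted performance).
X = np.column_stack([np.ones(n),
                     df[["pct_econ_disadv", "pct_black", "pct_hispanic"]]])
beta, *_ = np.linalg.lstsq(X, df["outcome"].to_numpy(), rcond=None)
df["residual"] = df["outcome"] - X @ beta
df["adjusted_rank"] = df["residual"].rank(ascending=False).astype(int)
print(df.sort_values("adjusted_rank").head())
```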

      Missing data

A few large metro areas do not have sufficient recent data from SEDA to enable us to compute rankings and are excluded: Albany-Schenectady-Troy (NY), Albuquerque (NM), Buffalo-Cheektowaga (NY), Rochester (NY), Seattle (WA), and Syracuse (NY). A table asterisk (*) indicates a case with important data limitations. The New York City and Portland (OR) metros are missing data for their largest districts (New York City Public Schools and Portland Public Schools, respectively) for all recent observations.15 In Colorado Springs, Denver, and Greeley (CO), key data points are missing in the early years of the SEDA data. Because missing early data makes it impossible for us to calculate metro progress, the value for that variable is imputed to align with the metro’s performance on the other components of the SLAM rank.

      Don’t the best schools have the highest achievement?

      Academic-growth measures capture student learning in ways that many believe are fairer to schools than measures of average academic achievement, which largely reflect the backgrounds of students. For these reasons, the Student Learning Accelerating Metros (SLAM) ranking formula does not include average overall achievement.

      Still, we might wonder whether metros with high average achievement also perform well on our growth-friendly SLAM rankings. Table 2 presents rankings of the largest metros based on unadjusted academic achievement.

      Table 2. Boston, New York City, and San Jose are the metros with the highest average academic achievement, while the inland California cities of Riverside and Fresno have the lowest.

Metro name | Raw achievement rank | SLAM rank
      Boston 1 15
      New York 2 10
      San Jose 3 7
      Raleigh 4 48
      Pittsburgh 5 13
      Minneapolis 6 31
      Indianapolis 7 5
      Jacksonville 8 18
      Cincinnati 9 28
      Washington, D.C. 10 37
      Miami 11 1
      Hartford 12 40
      Richmond 13 45
      Charlotte 14 24
      Kansas City 15 23
Virginia Beach 16 12
      Columbus 17 26
      Providence 18 38
      Orlando 19 20
      San Francisco 20 43
      Tampa 21 29
      Austin 22 32
      Nashville 23 36
      Chicago 24 8
St. Louis 25 22
      Philadelphia 26 42
      Atlanta 27 4
      Cleveland 28 25
      Baltimore 29 47
      Denver 30 19
      San Diego 31 11
      Salt Lake City 32 46
      Milwaukee 33 41
      Portland 34 30
      Dallas 35 16
      Houston 36 17
      Phoenix 37 14
      Louisville 38 39
      Oklahoma City 39 34
      Honolulu 40 50
      McAllen, TX 41 3
      Sacramento 42 33
      San Antonio 43 27
      Los Angeles 44 6
      Detroit 45 35
      Las Vegas 46 49
      Birmingham, AL 47 44
      Memphis 48 2
      Fresno 49 9
Riverside, CA 50 21

Ranking comparison

      The metro area that leads the nation according to the SLAM rankings is Miami, which performs well on overall academic achievement as well, coming in eleventh place. So does having higher overall achievement mean a metro will automatically fare better on the SLAM rankings?

      Figure 1. There is little relationship between the overall SLAM rank and the average academic achievement rank.

Note: A rank of one indicates the best performance, and a rank of fifty indicates the worst performance among the fifty largest metro areas. Raw achievement rankings are based on unadjusted average achievement. See Defining quality for more about the methodology. Each metro represents a core-based statistical area that includes nearby suburbs, towns, and other cities. Metro area names are abbreviated for legibility; for full names of the core-based statistical areas, see Appendix.

      To answer that question, consider Figure 1, a scatterplot comparing average academic achievement (unadjusted for student demographics) and the SLAM rankings. Statistically, these two measures exhibit virtually no correlation. Some metros such as Memphis, McAllen (TX), and Los Angeles have very low average achievement while earning high marks in the SLAM rankings based on their strong cohort growth. Other metros with high achievement, such as Raleigh and Washington, D.C., are ranked relatively poorly by our formula. Yet this pattern is not the norm. In fact, there are relatively high-achieving metros such as Miami, Indianapolis, and San Jose that the SLAM rankings place toward the top, along with lower-achieving metros including Birmingham, Las Vegas, and Honolulu that rank poorly.

      In other words, the level of average academic achievement is not a good predictor of how a metro area performs on the rankings: having high overall achievement does not mean a metro will earn a high rank, nor does it guarantee a low rank.
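
Readers who download the data can check this directly; for example, a Spearman rank correlation between the two columns of Table 2 comes out close to zero on the full fifty-metro table. The sketch below shows the calculation on a small hand-typed excerpt of Table 2 (too few rows, by itself, to establish the claim).

```python
import pandas as pd

# A toy excerpt of Table 2; the report's claim rests on all fifty metros.
t2 = pd.DataFrame({
    "metro": ["Boston", "Raleigh", "Miami", "Memphis", "Honolulu"],
    "achievement_rank": [1, 4, 11, 48, 40],
    "slam_rank": [15, 48, 1, 2, 50],
})

# Spearman correlation of the two rankings; on the full table a value
# near zero supports the "virtually no correlation" claim.
rho = t2["achievement_rank"].corr(t2["slam_rank"], method="spearman")
print(round(rho, 2))
```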

Contrast that with Figure 2, which compares the percentage of students in the metro who are economically disadvantaged with average academic achievement. The correlation is powerful: Boston, the metro with the least economically disadvantaged student population, has the highest achievement, while metros with heavily disadvantaged student populations, such as Fresno and McAllen, have some of the lowest levels of average achievement. Whereas the SLAM rankings focus on growth and control for student demographics in order to better isolate the performance of the local schools, a focus on overall achievement is likely to reflect the backgrounds of the student populations and other factors outside the schools’ control.

      Conclusion

What can policymakers, educators, and business leaders do in metro areas that don’t look so good? First, dig into the data. Understand which individual school districts are dragging down the ratings and which are doing relatively well. Then, study the high achievers and figure out what they are doing—or really, were doing, pre-pandemic—that might be emulated. In our minds, it’s similar to the work of APQC, a nonprofit consulting firm that gathers copious data to benchmark an organization’s performance against that of the industry’s top performers. Perhaps it’s time for business, education, and workforce leaders to engage in similar benchmarking.

      That’s probably an unsatisfying answer, but here’s where we must discuss the limitations of a data project like this one: We simply don’t know why the schools of certain metro areas are so much more effective than those of other regions. Our fervent hope is that scholars will use these tools to try to answer that question, as best they can. We can, however, certainly float some hypotheses, by perusing the list of winners and losers. For example, several metros with large, countywide school districts do quite well, including Miami, Memphis, and Atlanta. But there are counterexamples, too, including Raleigh and Las Vegas. Perhaps the prevalence of charter schools and school choice is an important factor; that surely is a big part of the story in Miami, Memphis, and Indianapolis, as well as the border towns along the Rio Grande. Could spending matter, adjusted for differences in costs of living? What about state policy? Florida and Texas boast many high flyers, for example—though also some laggards. California shows a stark north/south divide. Pennsylvania is largely lagging, though Pittsburgh does relatively well.

While academics try to sort this out, local leaders need to get to work. The Covid crisis, related school shutdowns, and massive learning loss mean that all schools are more or less starting from scratch. As the pandemic winds down (or so we hope) and educators figure out how to spend the historic funds provided to them by the American Rescue Plan and other federal relief bills, now is the time to accelerate progress. No doubt, student achievement is going to be depressed for many years, given that so many students were out of their traditional school environments for eighteen months or longer. And tragically, the impacts appear to have fallen most heavily on the students who already faced so many disadvantages, meaning that achievement gaps are widening as well. All the more reason to focus on what’s under schools’ control, which is how much learning they can pack into each academic year.

Business leaders can hold school systems accountable by demanding and publishing transparent data on student progress, as the Greater Phoenix Chamber Foundation and the Detroit Regional Chamber do with the Arizona Economic Dashboard and the State of Education report, respectively. They can highlight the leaders (and the laggards) in their own regions as schools get back to work. They can lend their expertise in ROI to help local districts make smart spending decisions, ones that invest in students rather than in giveaways to the system. They can support convenings and capacity building that help districts implement evidence-based strategies known to accelerate student learning. They can help forge partnerships between high schools and industry leaders to strengthen career and technical education and connect teaching with workforce needs, as the Greater Houston Partnership and the Dallas Regional Chamber have done with the UpSkill initiative and the Externships for Educators and Administrators, respectively. They can leverage the Chamber’s Talent Pipeline Management program, through which workforce leaders from across the country come together to develop solutions that support students, workers, and businesses in their communities. And they might get engaged in school board elections, too—not just in the central city but also in large suburban districts. A big victory in Albuquerque this year shows what’s possible when business gets involved.


Great public schools are essential to the health of any community, and they are a main driver of economic development. But we need to make sure we’re defining “great schools” the right way. Now, thanks to the new data provided by the Stanford Education Data Archive and the tools on this website, we can finally identify the metro areas that have a right to brag about the quality of their school systems and charter schools—and the ones that need to stop resting on their laurels. So let’s get to work on helping all American regions put effective schools at the center of their future-ready strategies.

      Appendix

      This appendix includes the full metro names that are abbreviated throughout this report and all rankings for both large and medium-sized metros.

      Full metro names

      Short Metro Name Long Metro Name (Core-Based Statistical Area)
      Akron Akron, OH
      Allentown, PA Allentown-Bethlehem-Easton, PA-NJ
      Atlanta Atlanta–Sandy Springs–Alpharetta, GA
      Austin Austin–Round Rock–Georgetown, TX
      Bakersfield, CA Bakersfield, CA
      Baltimore Baltimore-Columbia-Towson, MD
      Baton Rouge Baton Rouge, LA
      Birmingham, AL Birmingham-Hoover, AL
      Boise City Boise City, ID
      Boston Boston-Cambridge-Newton, MA-NH
      Bridgeport, CT Bridgeport-Stamford-Norwalk, CT
      Brownsville, TX Brownsville-Harlingen, TX
      Cape Coral, FL Cape Coral–Fort Myers, FL
      Charleston, SC Charleston–North Charleston, SC
      Charlotte Charlotte-Concord-Gastonia, NC-SC
      Chicago Chicago-Naperville-Elgin, IL-IN-WI
      Cincinnati Cincinnati, OH-KY-IN
      Cleveland Cleveland-Elyria, OH
      Colorado Springs Colorado Springs, CO
      Columbia, SC Columbia, SC
      Columbus Columbus, OH
      Corpus Christi Corpus Christi, TX
      Dallas Dallas–Fort Worth–Arlington, TX
      Dayton Dayton-Kettering, OH
      Deltona-Daytona Deltona–Daytona Beach–Ormond Beach, FL
      Denver Denver-Aurora-Lakewood, CO
      Des Moines Des Moines–West Des Moines, IA
      Detroit Detroit-Warren-Dearborn, MI
      Durham–Chapel Hill Durham–Chapel Hill, NC
      El Paso El Paso, TX
      Fayetteville, AR Fayetteville–Springdale-Rogers, AR
      Fresno Fresno, CA
      Grand Rapids, MI Grand Rapids–Kentwood, MI
      Greeley, CO Greeley, CO
      Greensboro, NC Greensboro–High Point, NC
      Greenville, SC Greenville-Anderson, SC
      Harrisburg, PA Harrisburg-Carlisle, PA
      Hartford Hartford–East Hartford–Middletown, CT
      Honolulu Urban Honolulu, HI
      Houston Houston–The Woodlands–Sugar Land, TX
      Indianapolis Indianapolis-Carmel-Anderson, IN
      Jackson, MS Jackson, MS
      Jacksonville Jacksonville, FL
      Kansas City Kansas City, MO-KS
      Killeen, TX Killeen-Temple, TX
      Knoxville Knoxville, TN
      Lakeland, FL Lakeland–Winter Haven, FL
      Las Vegas Las Vegas–Henderson–Paradise, NV
      Little Rock Little Rock–North Little Rock–Conway, AR
      Los Angeles Los Angeles–Long Beach–Anaheim, CA
      Louisville Louisville/Jefferson County, KY-IN
      Madison, WI Madison, WI
      McAllen, TX McAllen-Edinburg-Mission, TX
      Memphis Memphis, TN-MS-AR
      Miami Miami–Fort Lauderdale–Pompano Beach, FL
      Milwaukee Milwaukee-Waukesha, WI
      Minneapolis Minneapolis–St. Paul–Bloomington, MN-WI
      Modesto, CA Modesto, CA
      Nashville Nashville-Davidson-Murfreesboro-Franklin, TN
      New Haven New Haven–Milford, CT
      New Orleans New Orleans–Metairie, LA
      New York New York–Newark–Jersey City, NY-NJ-PA
      North Port, FL North Port–Sarasota–Bradenton, FL
      Ogden, UT Ogden-Clearfield, UT
      Oklahoma City Oklahoma City, OK
      Omaha Omaha–Council Bluffs, NE-IA
      Orlando Orlando-Kissimmee-Sanford, FL
      Oxnard, CA Oxnard–Thousand Oaks–Ventura, CA
      Philadelphia Philadelphia-Camden-Wilmington, PA-NJ-DE-MD
      Phoenix Phoenix-Mesa-Chandler, AZ
      Pittsburgh Pittsburgh, PA
      Portland Portland-Vancouver-Hillsboro, OR-WA
      Portland, ME Portland–South Portland, ME
      Providence Providence-Warwick, RI-MA
      Provo, UT Provo-Orem, UT
      Raleigh Raleigh-Cary, NC
      Richmond Richmond, VA
      Riverside, CA Riverside–San Bernardino–Ontario, CA
      Sacramento Sacramento-Roseville-Folsom, CA
      Salinas, CA Salinas, CA
      Salt Lake City Salt Lake City, UT
      San Antonio San Antonio–New Braunfels, TX
      San Diego San Diego–Chula Vista–Carlsbad, CA
      San Francisco San Francisco-Oakland-Berkeley, CA
      San Jose San Jose-Sunnyvale-Santa Clara, CA
      Scranton Scranton-Wilkes-Barre, PA
      Shreveport, LA Shreveport–Bossier City, LA
      Springfield, MA Springfield, MA
      St. Louis St. Louis, MO-IL
      Stockton, CA Stockton, CA
      Tampa Tampa–St. Petersburg–Clearwater, FL
      Toledo Toledo, OH
      Tucson Tucson, AZ
      Tulsa Tulsa, OK
      Virginia Beach Virginia Beach-Norfolk-Newport News, VA-NC
      Visalia, CA Visalia, CA
      Washington, D.C. Washington-Arlington-Alexandria, DC-VA-MD-WV
      Wichita Wichita, KS
      Winston-Salem Winston-Salem, NC
      Worcester, MA Worcester, MA-CT

      Rankings for large metros

      Metro Area Overall Rank Academic Growth Rank Disadvantaged Growth Rank Metro Progress Rank High School Graduation Rate Rank
      Miami 1 1 11 13 33
      Memphis 2 4 1 3 28
      McAllen, TX 3 3 20 2 14
      Atlanta 4 2 2 16 38
      Indianapolis 5 5 3 4 5
      Los Angeles 6 9 18 12 7
      San Jose 7 6 24 5 24
      Chicago 8 8 10 23 12
      Fresno 9 14 17 17 6
      New York * 10 13 5 19 25
      San Diego 11 12 19 11 20
      Virginia Beach 12 16 6 6 27
      Pittsburgh 13 11 21 24 18
      Phoenix 14 15 22 9 16
      Boston 15 10 4 40 37
      Dallas 16 19 13 39 1
      Houston 17 18 12 38 4
      Jacksonville 18 17 15 7 41
      Denver † 19 7 14 No Data 48
      Orlando 20 22 33 14 21
      Riverside, CA 21 29 25 30 3
      St. Louis 22 21 28 29 23
      Kansas City 23 28 16 35 13
      Charlotte 24 20 30 34 22
      Cleveland 25 24 41 20 19
      Columbus 26 30 26 22 17
      San Antonio 27 31 23 44 8
      Cincinnati 28 26 45 27 15
      Tampa 29 25 38 15 43
      Portland * 30 23 27 42 47
      Minneapolis 31 27 37 37 46
      Austin 32 35 31 47 2
      Sacramento 33 36 29 25 11
      Oklahoma City 34 37 34 21 10
      Detroit 35 32 8 43 40
      Nashville 36 39 39 1 26
      Washington, DC 37 34 7 33 45
      Providence 38 38 35 31 34
      Louisville 39 41 47 10 29
      Hartford 40 40 32 48 30
      Milwaukee 41 33 49 36 39
      Philadelphia 42 42 40 46 35
      San Francisco 43 45 43 18 31
      Birmingham, AL 44 46 48 32 9
      Richmond 45 43 46 41 44
      Salt Lake City 46 48 36 8 49
      Baltimore 47 44 44 50 36
      Raleigh 48 47 50 49 32
      Las Vegas 49 49 42 28 42
      Honolulu 50 50 9 45 50

      Note: Each metro represents a core-based statistical area that includes nearby suburbs, towns, and other cities. Metro area names are abbreviated for legibility; for full names of the core-based statistical areas, see Table A1 above. All outcomes are adjusted for district-level demographics.

      *Metro is missing data for its largest district for all recent observations.

      †Key data points are missing in the early years of the SEDA data; because missing early data makes it impossible to calculate metro progress, the value for that variable is imputed to align with the metro’s performance on the other components of the SLAM rank.

      Rankings for midsized metros

      Metro Area Overall Rank Academic Growth Rank Disadvantaged Growth Rank Metro Progress Rank High School Graduation Rate Rank
      Brownsville, TX 1 1 18 3 5
      North Port, FL 2 2 12 1 23
      Jackson, MS 3 5 1 6 27
      El Paso 4 3 13 33 16
      Boise City 5 4 16 15 46
      Grand Rapids, MI 6 6 8 18 31
      Fayetteville, AR 7 7 28 41 7
      Baton Rouge 8 9 2 12 41
      Corpus Christi 9 14 10 34 3
      Toledo 10 10 11 27 22
      Bridgeport, CT 11 12 6 43 13
      Visalia, CA 12 18 24 14 1
      Cape Coral, FL 13 8 21 19 47
      Deltona-Daytona 14 13 30 5 45
      Tucson 15 17 9 7 24
      New Orleans 16 24 3 13 21
      Greenville, SC 17 16 36 17 25
      Killeen, TX 18 25 14 20 2
      Charleston, SC 19 11 43 31 29
      Greensboro, NC 20 19 17 42 9
      Durham-Chapel Hill 21 15 39 35 26
      Greeley, CO† 22 21 15 No Data 44
      New Haven 23 26 7 39 34
      Modesto, CA 24 32 19 32 15
      Provo, UT 25 27 23 24 38
      Worcester, MA 26 28 5 44 40
      Springfield, MA 27 29 20 23 43
      Akron 28 33 38 22 17
      Dayton 29 30 40 38 18
      Allentown, PA 30 34 25 36 28
      Colorado Springs† 31 22 4 No Data 49
      Wichita 32 31 33 45 33
      Bakersfield, CA 33 23 22 4 50
      Omaha 34 40 42 9 11
      Des Moines 35 42 45 10 8
      Oxnard, CA 36 45 27 26 6
      Little Rock 37 36 49 30 19
      Winston-Salem 38 35 41 48 30
      Stockton, CA 39 48 32 8 14
      Lakeland, FL 40 37 26 11 48
      Madison, WI 41 20 50 49 32
      Shreveport, LA 42 38 44 25 39
      Columbia, SC 43 41 37 47 12
      Salinas, CA 44 49 31 21 4
      Knoxville 45 44 48 16 20
      Tulsa 46 47 35 40 10
      Portland, ME 47 39 46 46 42
      Ogden, UT 48 50 29 2 37
      Scranton 49 43 34 50 36
      Harrisburg, PA 50 46 47 29 35

      Note: Each metro represents a core-based statistical area that includes nearby suburbs, towns, and other cities. Metro area names are abbreviated for legibility; for full names of the core-based statistical areas, see Table A1 above. All outcomes are adjusted for district-level demographics.

      †Key data points are missing in the early years of the SEDA data; because missing early data makes it impossible to calculate metro progress, the value for that variable is imputed to align with the metro’s performance on the other components of the SLAM rank.

This report was made possible through the generous support of the U.S. Chamber of Commerce Foundation, as well as our sister organization, the Thomas B. Fordham Foundation. We are deeply grateful to external reviewers Kristin Blagg, senior research associate at the Urban Institute, and Stephen Holt, assistant professor of economics at SUNY Albany, for offering advice on the rankings methodology and reviewing the interactive website.

On the Fordham side, we thank Fordham’s associate director of research Adam Tyner for serving as the lead analyst and project manager; Amber Northern, Michael Petrilli, and Chester E. Finn, Jr., for providing feedback on drafts; Pedro Enamorado for managing report production and design; Victoria McDougald for overseeing media dissemination; Will Rost for handling funder communications; and Jeremy Smith for invaluable assistance at various stages in the process. We extend thanks to Pamela Tatz for copyediting.

We also thank Juan Thomassie of Data-Visual for the UX design and front-end development work on this webpage, the interactive ranking apps, and the data visualizations.

      Cover photo credit: RoschetzkyIstockPhoto/iStock/Getty Images Plus

      Endnotes

      1. Although this is the first study to rank metro areas using data sources focused on student growth, other recent efforts evaluate the educational performance of districts and cities nationwide. For example, in 2020, USA Today published district-level rankings that calculated proficiency rates, teacher-to-student ratios, per-pupil spending, child poverty rates, and more to determine the districts where students were least likely to succeed. See Samuel Stebbins and Michael B. Sauter, “Making the grade?: In these school districts, students are less likely to succeed,” USA Today, March 11, 2020, https://www.usatoday.com/story/money/2020/03/11/school-districts-50-us-where-students-least-likely-succeed/5000094002. Also, in 2016 the Urban Institute used data from the Trial Urban District Assessment (TUDA) to rank twenty-one districts on student achievement, showing the differences between scores adjusted and not adjusted for demographic factors. See Kristin Blagg, “Making the Grade in America’s Cities: Assessing Student Achievement in Urban Districts” (Washington, D.C.: Urban Institute, June 2016), https://www.urban.org/research/publication/making-grade-americas-cities-assessing-student-achievement-urban-districts.
      2. The SEDA data and documentation are publicly available at https://edopportunity.org. For more, see Sean F. Reardon et al., “Can Repeated Aggregate Cross-Sectional Data Be Used to Measure Average Student Learning Rates? A Validation Study of Learning Rate Measures in the Stanford Education Data Archive,” CEPA Working Paper No. 19-08 (Stanford, CA: Center for Education Policy Analysis, November 2019), https://cepa.stanford.edu/sites/default/files/learning_rate_validation_nontechnical_summary.pdf.
      3. Aggregation is accomplished by taking weighted (by student population) averages of the standardized outcomes at the district level. See “Delineation Files,” United States Census Bureau, last revised October 8, 2021, https://www.census.gov/geographies/reference-files/time-series/demo/metro-micro/delineation-files.html.
      4. Erica Blom, Macy Rainer, and Matthew Chingos, Comparing Colleges’ Graduation Rates: The Importance of Adjusting for Student Characteristics (Washington, D.C.: Urban Institute, January 2020), https://www.urban.org/sites/default/files/publication/101635/comparing_colleges_graduation_rates_0.pdf.
      5. This measure corresponds to SEDA’s “learning rates” measure on their edopportunity.org website. For more information on constructing “cohort growth,” see Reardon et al., “Can Repeated Aggregate Cross-Sectional Data Be Used to Measure Average Student Learning Rates?”
      6. Factors such as retention of students, students leaving the metro area, and students arriving into the metro area all tend to lessen the validity of cohort growth measures, as they violate the assumption that, for example, a third grader in Miami in 2016 will be a fourth grader in Miami in 2017. However, student mobility is much less of a threat in expansive metro areas such as those ranked in this report than in individual districts or schools. Furthermore, measures of cohort growth are excluded when there is a greater than 5 percent change in the number of students in a cohort between the current year and the previous (baseline) year.
      7. When some scores are missing for one subject or the other, the reported “composite” (i.e., combination of math and ELA) will reflect a greater share of the subject with more complete data.
8. Because economically disadvantaged students may be of any racial/ethnic group, students who both belong to a traditionally disadvantaged racial/ethnic group and are economically disadvantaged are emphasized even more by this formula.
      9. Because it is not possible to separate the data on economic disadvantage by race, students who fall into two disadvantaged categories (i.e., Black or Hispanic students who are also economically disadvantaged) are effectively given double weight in this measure. Measures for economically disadvantaged students are gleaned from the SEDA data, which identifies such students based on free or reduced-price lunch program qualification, as well as those classified as economically disadvantaged in the U.S. Department of Education EdFacts data.
      10. This measure corresponds to SEDA’s “trends in test scores” measure on their edopportunity.org website.
      11. This measure is only calculated when a metro has at least three subject/grade/year observations available in both the early period and the later period, including for the largest district in the metro.
      12. Aggregation is accomplished by taking weighted (by student population) averages of the standardized graduation rates at the district level.
      13. This measure corresponds to SEDA’s “average test scores” measure on their edopportunity.org website.
      14. “Outperform or underperform their predictions” refers to residuals after subtracting the predicted values assigned by the regression models. Student demographics include SEDA values for the proportion of Asian, Black, Hispanic, White, and economically disadvantaged students; total enrollment; and the proportion of urban, suburban, town, and rural schools in the district. The analysis also accounts for changes in demographics in recent years by including the percentage of students belonging to each of the student groups and overall enrollment in 2009 through 2011, as well as an interaction term for each of these which helps account for changes in the demographics of each metro during the study period (2009–18). The SEDA demographic data are also used in the adjustments for high school graduation rate.
      15. Test scores are not reported in SEDA when fewer than 95 percent of eligible students test. Because New York has had a large “opt-out” movement in recent years, the state has more families refusing to take state tests than any other. See Keshia Clukey, “In spite of state efforts, test opt-out rates remain high,” Politico, July 29, 2016, https://www.politico.com/states/new-york/albany/story/2016/07/opt-out-numbers-for-2016-104370.