education data

  • Update On Teacher Diversity Data: Good News, Bad News, And Strange News

    Written on December 5, 2018

    A couple of months ago, we released a report on the collection and availability of teacher race and ethnicity data, based on our late 2017 survey of all 51 state education agencies (SEAs) in the U.S. We asked them two simple questions: 1) Do you collect data (school- or district-level) on teacher race and ethnicity; and 2) Do you make the data public, and how (i.e., by request or on your website)?

    Our findings, in brief, were that the majority of states both collected and made public school- and district-level data on teacher diversity, but that six states did not collect the data at all, and another four collected the data but did not make them available to the public.

    Since the publication of that report, we’ve come across significant new information pertaining to three states, which we would like to note briefly. We might characterize these three updates as good news, bad news, and strange news.

    READ MORE
  • The Teacher Diversity Data Landscape

    Written on September 20, 2018

    This week, the Albert Shanker Institute released a new research brief, authored by me and Klarissa Cervantes. It summarizes what we found when we contacted all 51 state education agencies (including the District of Columbia) and asked whether data on teacher race and ethnicity were being collected, and whether and how they were made available to the public. The survey was begun in late 2017 and completed in early 2018.

    The primary reason behind this project is the growing body of research suggesting that all students, and especially students of color, benefit from a teaching force that reflects the diverse society in which they must learn to live, work and prosper. ASI’s previous work has also documented that a great many districts should turn their attention to recruiting and retaining more teachers of color (see our 2015 report). Data are a basic requirement for achieving this goal – without data, states and districts are unable to gauge the extent of their diversity problem, target support and intervention to address that problem, and monitor the effects of those efforts. Unfortunately, the federal government does not require that states collect teacher race and ethnicity data, which means the responsibility falls to individual states. Moreover, statewide data are often insufficient – teacher diversity can vary widely within and between districts. Policymakers, administrators, and the public need detailed data (at least district-by-district and preferably school-by-school), which should be collected annually and made easily available.

    The results of our survey are generally encouraging. The vast majority of state education agencies (SEAs), 45 out of 51, report that they collect at least district-by-district data on teacher race and ethnicity (and all but two of these 45 collect school-by-school data). This is good news (and, frankly, better results than we anticipated). There are, however, areas of serious concern.

    READ MORE
  • We Can't Graph Our Way Out Of The Research On Education Spending

    Written on April 17, 2018

    The graph below was recently posted by U.S. Education Department (USED) Secretary Betsy DeVos, as part of her response to the newly released scores on the 2017 National Assessment of Educational Progress (NAEP), administered every two years and often called the “nation’s report card.” It seems to show a massive increase in per-pupil education spending, along with a concurrent flat trend in scores on the fourth grade reading version of NAEP. The intended message is that spending more money won’t improve testing outcomes. Or, in the more common phrasing these days, "we can't spend our way out of this problem."

    Some of us call it “The Graph.” Versions of it have been used before. And it’s the kind of graph that doesn’t need to be discredited, because it discredits itself. So, why am I bothering to write about it? The short answer is that I might be unspeakably naïve. But we’ll get back to that in a minute.

    First, let’s very quickly run through the graph. In terms of how it presents the data, it is horrible practice. The double y-axes, with spending on the left and NAEP scores on the right, are a textbook example of what you might call motivated scaling (and that's being polite). The NAEP scores plotted range from a minimum of 213 in 2000 to a maximum of 222 in 2017, but the graph inexplicably extends all the way up to 275. In contrast, the spending scale extends from just below the minimum observation ($6,000) to just above the maximum ($12,000). In other words, the graph is deliberately scaled to produce the desired visual effect (increasing spending, flat scores). One could very easily rescale the graph to produce the opposite.
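    To see how much the axis limits alone drive the visual impression, here is a minimal sketch in Python. The 213–222 score range and the 275 axis ceiling come from the graph as described above; the other axis bounds are assumptions chosen purely for illustration:

```python
# Toy illustration of "motivated scaling": the apparent steepness of a
# plotted trend depends on how much of the axis range the data occupy.

def visual_rise(data_min, data_max, axis_min, axis_max):
    """Fraction of the plot height that a data series spans."""
    return (data_max - data_min) / (axis_max - axis_min)

# NAEP 4th grade reading scores ranged from 213 (2000) to 222 (2017).
# On an axis stretched up to 275 (as in the graph; lower bound of 200
# is an assumption here), the 9-point gain fills a sliver of the chart:
flat_looking = visual_rise(213, 222, 200, 275)

# Rescaled tightly around the observations, the same gain fills it:
steep_looking = visual_rise(213, 222, 212, 223)

print(f"padded axis: scores span {flat_looking:.0%} of plot height")
print(f"tight axis:  scores span {steep_looking:.0%} of plot height")
```

The identical 9-point gain occupies about a tenth of the chart under one scaling and most of it under the other, which is the whole trick.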

    READ MORE
  • Our Request For Simple Data From The District Of Columbia

    Written on December 2, 2016

    For our 2015 report, “The State of Teacher Diversity in American Education,” we requested data on teacher race and ethnicity between roughly 2000 and 2012 from nine of the largest school districts in the nation: Boston; Chicago; Cleveland; District of Columbia; Los Angeles; New Orleans; New York; Philadelphia; and San Francisco.

    Only one of these districts failed to provide us with data that we could use to conduct our analysis: the District of Columbia.

    To be clear, the data we requested are public record. Most of the eight other districts to which we submitted requests complied in a timely fashion. A couple of them took months to fill the request, and required a little follow-up. But all of them gave us what we needed. We were actually able to get charter school data for virtually all of these eight cities (usually through the state).

    Even New Orleans, which, during the years for which we requested data, was destroyed by a hurricane and underwent a comprehensive restructuring of its entire school system, provided the data.

    But not DC.

    READ MORE
  • Contingent Faculty At U.S. Colleges And Universities

    Written on September 9, 2016

    In a previous post, we discussed the prevalence of and trends in alternative employment arrangements, sometimes called “contingent work,” in the U.S. labor market. Contingent work refers to jobs with employment arrangements other than the “traditional” full-time model, including workers with temporary contracts, independent contractors, day laborers, and part-time employees.

    Depending on how one defines this group of workers – a diverse group, but one that tends to enjoy less job stability and lower compensation – they comprise anywhere between 10 and 40 percent of the U.S. workforce, and this share increased moderately between 2000 and 2010. Of course, how many contingent workers there are, and how this has changed over time, varies quite drastically by industry, as well as by occupation. For example, in 1990, around 28 percent of staffing services employees (sometimes called “temps”) worked in blue collar positions, while 42 percent had office jobs. By 2009, these proportions had roughly reversed, with 41 percent of temps in blue collar jobs and 23 percent doing office work. This is a pretty striking change.

    Another industry/occupation in which there has been significant short term change in the contingent work share is among faculty and instructors in higher education institutions.

    READ MORE
  • Getting Serious About Measuring Collaborative Teacher Practice

    Written on April 8, 2016

    Our guest author today is Nathan D. Jones, an assistant professor of special education at Boston University. His research focuses on teacher quality, teacher development, and school improvement. Dr. Jones previously worked as a middle school special education teacher in the Mississippi Delta. In this column, he introduces a new Albert Shanker Institute publication, which was written with colleagues Elizabeth Bettini and Mary Brownell.

    The current policy landscape presents a dilemma. Teacher evaluation has dominated recent state and local reform efforts, resulting in broad changes in teacher evaluation systems nationwide. The reforms have spawned countless research studies on whether emerging evaluation systems use measures that are reliable and valid, whether they result in changes in how teachers are rated, what happens to teachers who receive particularly high or low ratings, and whether the net results of these changes have had an effect on student learning.

    At the same time, there has been increasing enthusiasm about the promise of teacher collaboration (see here and here), spurred in part by new empirical evidence linking teacher collaboration to student outcomes (see Goddard et al., 2007; Ronfeldt, 2015; Sun, Grissom, & Loeb, 2016). When teachers work together, such as when they jointly analyze student achievement data (Gallimore et al., 2009; Saunders, Goldenberg, & Gallimore, 2009) or when high-performing teachers are matched with low-performing peers (Papay, Taylor, Tyler, & Laski, 2016), students have shown substantially better growth on standardized tests.

    This new work adds to a long line of descriptive research on the importance of colleagues and other social aspects of the school organization. Research has documented that informal relationships with colleagues play an important role in promoting positive teacher outcomes, such as planned and actual retention decisions (e.g., Bryk & Schneider, 2002; Pogodzinski, Youngs, & Frank, 2013; Youngs, Pogodzinski, Grogan, & Perrone, 2015). Further, a number of initiatives aimed at improving teacher learning – e.g., professional learning communities (Giles & Hargreaves, 2006) and lesson study (Lewis, Perry, & Murata, 2006) – rely on teachers planning instruction collaboratively.

    READ MORE
  • The Story Behind The Story: Social Capital And The Vista Unified School District

    Written on August 19, 2015

    Our guest author today is Devin Vodicka, superintendent of Vista Unified, a California school district serving over 22,000 students that was recently accepted into the League of Innovative Schools. Dr. Vodicka participates in numerous state and national leadership groups, including the Superintendents Technical Working Group of the U.S. Education Department.

    Transforming a school district is challenging and complex work, often requiring paradigm shifts and historical perspective, all while maintaining or improving performance. Here, I’d like to share how we approached change at Vista Unified School District (VUSD) and to describe the significant transformation we’ve been undergoing, driven by data, focused on relationships, and based in deep partnerships. Although Vista has been hard at work over many years, this particular chapter starts in July of 2012, when I was hired.

    When I became superintendent, the district was facing numerous challenges: Declining enrollment, financial difficulties, strained labor relations, significant turnover in the management ranks, and unresolved lawsuits were all areas in need of attention. The school board charged me and my team with transforming the district, which serves large numbers of linguistically, culturally, and economically diverse students. While there is still significant room for improvement, much has changed in the past three years, generally trending in a positive direction. Below is the story of how we did it.

    READ MORE
  • Actual Growth Measures Make A Big Difference When Measuring Growth

    Written on February 25, 2015

    As a frequent critic of how states and districts present and interpret their annual testing results, I am also obliged (and indeed quite happy) to note when there is progress.

    Recently, I happened to be browsing through New York City’s presentation of their 2014 testing results, and to my great surprise, on slide number four, I found proficiency rate changes between 2013 and 2014 among students who were in the sample in both years (which they call “matched changes”). As it turns out, last year, for the first time, New York State as a whole began publishing these "matched" year-to-year proficiency rate changes for all schools and districts. This is an excellent policy. As we’ve discussed here many times, NCLB-style proficiency rate changes, which compare overall rates of all students, many of whom are only in the tested sample in one of the years, are usually portrayed as “growth” or “progress.” They are not. They compare different groups of students, and, as we’ll see, this can have a substantial impact on the conclusions one reaches from the data. Limiting the sample to students who were tested in both years, though not perfect, at least permits one to measure actual growth per se, and provides a much better idea of whether students are progressing over time.
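    A minimal sketch in Python may make the distinction concrete. All students and scores here are invented (this is not NYC data, and the cutoff of 65 is an arbitrary assumption); the point is that the two calculations can tell opposite stories about the same school:

```python
# Cross-sectional ("unmatched") changes compare different groups of
# students across years; matched changes follow the same students.

def proficiency_rate(scores, cutoff=65):
    """Share of scores at or above an (assumed) proficiency cutoff."""
    scores = list(scores)
    return sum(s >= cutoff for s in scores) / len(scores)

# Toy data: student id -> score. Student "a" left after year 1;
# a high-scoring student "e" arrived in year 2.
year1 = {"a": 50, "b": 70, "c": 80, "d": 60}
year2 = {"b": 60, "c": 85, "d": 62, "e": 95}

# Unmatched (NCLB-style) change: all tested students in each year.
unmatched = proficiency_rate(year2.values()) - proficiency_rate(year1.values())

# Matched change: only students tested in BOTH years.
both = year1.keys() & year2.keys()
matched = (proficiency_rate(year2[s] for s in both)
           - proficiency_rate(year1[s] for s in both))

print(f"unmatched change: {unmatched:+.0%}")  # flat: 50% proficient both years
print(f"matched change:   {matched:+.0%}")    # the cohort actually declined
```

Here the unmatched rate is flat (the incoming high scorer masks the decline), while the matched cohort's proficiency actually fell, which is exactly the kind of divergence discussed below.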

    This is an encouraging sign that New York State is taking steps to improve the quality and interpretation of their testing data. And, just to prove that no good deed goes unpunished, let’s see what we can learn using the new “matched” data – specifically, by seeing how often the matched (longitudinal) and unmatched (cross-sectional) changes lead to different conclusions about student “growth” in schools.

    READ MORE
  • The Accessibility Conundrum In Accountability Systems

    Written on January 7, 2015

    One of the major considerations in designing accountability policy, whether in education or other fields, is what you might call accessibility. That is, both the indicators used to construct measures and how they are calculated should be reasonably easy for stakeholders to understand, particularly if the measures are used in high-stakes decisions.

    This important consideration also generates great tension. For example, complaints that Florida’s school rating system is “too complicated” have prompted legislators to make changes over the years. Similarly, other tools – such as procedures for scoring and establishing cut points for standardized tests, and particularly the use of value-added models – are routinely criticized as too complex for educators and other stakeholders to understand. There is an implicit argument underlying these complaints: If people can’t understand a measure, it should not be used to hold them accountable for their work. Supporters of using these complex accountability measures, on the other hand, contend that it’s more important for the measures to be “accurate” than easy to understand.

    I personally am a bit torn. Given the extreme importance of accountability systems’ credibility among those subject to them, not to mention the fact that performance evaluations must transmit accessible and useful information in order to generate improvements, there is no doubt that overly complex measures can pose a serious problem for accountability systems. It might be difficult for practitioners to adjust their practice based on a measure if they don't understand that measure, and/or if they are unconvinced that the measure is transmitting meaningful information. And yet, the fact remains that measuring the performance of schools and individuals is extremely difficult, and simplistic measures are, more often than not, inadequate for these purposes.

    READ MORE
  • Teachers And Education Reform, On A Need To Know Basis

    Written on July 1, 2014

    A couple of weeks ago, the website Vox.com published an article entitled, “11 facts about U.S. teachers and schools that put the education reform debate in context." The article, in the wake of the Vergara decision, is supposed to provide readers with the “basic facts” about the current education reform environment, with a particular emphasis on teachers. Most of the 11 facts are based on descriptive statistics.

    Vox advertises itself as a source of accessible, essential, summary information -- what you "need to know" -- for people interested in a topic but not necessarily well-versed in it. Right off the bat, let me say that this is an extraordinarily difficult task, and in constructing lists such as this one, there’s no way to please everyone (I’ve read a couple of Vox’s education articles and they were okay).

    That said, someone sent me this particular list, and it’s pretty good overall, especially since it does not reflect overt advocacy for given policy positions, as so many of these types of lists do. But I was compelled to comment on it. I want to say that I did this to make some lofty point about the strengths and weaknesses of data and statistics packaged for consumption by the general public. It would, however, be more accurate to say that I started doing it and just couldn't stop. In any case, here’s a little supplemental discussion of each of the 11 items:

    READ MORE

