First Person

“Sharp Focus” On Teacher Prep Programs Falls Flat

Last week, the New York City Department of Education issued its first-ever Teacher Preparation Program Reports. The department was judicious in not describing the reports as an evaluation of the quality or effectiveness of the dozen teacher-preparation programs in the New York City area that collectively produce more than 50 percent of the 10,000 traditional-pathway teachers hired by the city over the past five years.

Others were not so careful. Writing in The New York Times, Javier Hernandez described the PowerPoint slides comparing the 12 programs as “scorecards,” and stated that these ed schools were being “evaluated,” a term repeated in his article’s headline. Politico also used the term “scorecard.” The Wall Street Journal described the data as “rankings,” although teacher-preparation programs were not ranked. The Associated Press described the data as “grading” the colleges and universities, and looked for “winners or losers.” The New York Post and the New York Daily News both referred to “grading” the programs. Even my own institution, Teachers College, which appears in the data, fell into this trap: the headline on the college’s webpage reads, “TC Rated in City Evaluation of Teacher Prep Programs.”

What’s the big deal? Report, description, analysis, comparison, ratings, rankings, evaluation — aren’t these all pretty much the same thing?

No, they are not, for several reasons.

First of all, we cannot view the descriptive information about New York City teachers emerging from each program as an evaluation of the program, because we have no idea if the teachers who start their careers in the Big Apple are typical or representative of all of the new teachers produced by each program. Do NYC schools attract the best or the worst of each program’s graduates? We have no idea.


If you will forgive a sports analogy — drawn from basketball, in honor of our Hoopster-in-Chief, Arne Duncan — consider the players from the University of North Carolina at Chapel Hill who’ve entered the National Basketball Association over the past three decades. Would it be fair to evaluate UNC’s performance as a training ground for the NBA based only on how its players perform for the Los Angeles Lakers? What about that Michael Jordan fellow, who played only for the Chicago Bulls and Washington Wizards? Should his performance be ignored? When a preparation program sends its graduates to many different destinations, we cannot evaluate its quality based on how those graduates perform in just a single destination.

Now, when Michael Jordan entered the NBA, he was drafted by the Chicago Bulls — after the Houston Rockets picked Hakeem Olajuwon and the Portland Trail Blazers chose Sam Bowie. The nature of the NBA draft is that these teams had exclusive rights to these players, who couldn’t choose which NBA team to sign with, even if they thought that other teams had more talented players, or had a better coach, or would pay them more money. Does playing on a stable team with experienced teammates and an excellent coach improve a player’s performance? It’s hard to know, because most of the time we only see how a player performs with the team that drafted him.

Public education doesn’t use a “draft” to match new teachers with schools, but in both teaching and basketball, there’s a labor market with a supply of, and demand for, new talent. Where teachers wind up and how they perform on the job aren’t entirely up to them; a teacher with specialized training and credentials may only be interviewed and hired by a school seeking that specialized expertise. Conversely, one can scarcely fault a teacher for choosing among multiple job offers on the basis of which one pays the best, or has the best facilities, or is in a desirable location. As Kata Mihaly and her colleagues, as well as Bruce Baker of Rutgers, have demonstrated, when labor markets result in a non-random distribution of teachers across schools and districts, it’s very difficult to disentangle the effects of the teacher-preparation program on teaching outcomes from the effects of school context.

For this reason, the descriptions of how the graduates of the dozen metro-area teacher-preparation programs are distributed throughout the system are hard to interpret. It’s interesting to see that the graduates of a particular program are more likely to teach in what the Department of Education refers to as highest-need schools, or that the teachers from a particular program are more likely to leave the district than those from other programs, but what do such things mean?

In fact, the comparisons across programs revealed far more similarities than differences, a finding that is very likely reassuring to the education department, which inevitably must rely on diverse providers to supply the teachers it hires each year.

The data receiving the most attention were the ratings that graduates of the 12 programs received via the New York Student Growth Percentiles methodology developed by the State Education Department for the Annual Professional Performance Reviews. The 2011-12 methodology unfairly penalized some teachers and rewarded others, in my professional opinion, and the ratings were only assigned to the 15 percent of educators teaching either English Language Arts or mathematics in grades four through eight — scarcely a representative subset of the teachers prepared in any of the dozen programs. (And then there’s the pesky question of whether the state’s tests in 2011 and 2012 were good indicators of the most important things we want students to learn.) But some observers continue to view them as the most “objective” sources of information about teacher performance. Duncan, for example, said that the project “puts the record of preparation programs — including their impact on student learning — into sharp focus.”

The distribution of performance among teachers in New York City looks a lot like that across the state: 7 percent of teachers rated highly effective, and 6 percent rated ineffective, with the vast majority rated effective, based on the Student Growth Percentiles. And, although the Department of Education didn’t come out and say this, the distributions look very similar across the 12 teacher-preparation programs as well. A simple statistical test of association, the chi-square test, indicates that we cannot rule out the possibility that the distribution of teacher ratings is the same from one program to the next.
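For readers curious about that calculation, here is a minimal sketch of such a test in Python, using scipy’s chi2_contingency. The program-by-rating counts below are hypothetical, invented only to mirror the rough proportions described above (a small share of highly effective and ineffective ratings, with the vast majority effective); they are not drawn from the actual Teacher Preparation Program Reports.

```python
# Illustrative only: hypothetical counts, not the real report data.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: three hypothetical preparation programs.
# Columns: counts of teachers rated [highly effective, effective, ineffective].
observed = np.array([
    [14, 170, 12],
    [10, 130,  9],
    [ 8, 115,  7],
])

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")

# A large p-value means the differences between programs are small enough to be
# consistent with chance -- that is, we cannot rule out that the underlying
# distribution of ratings is the same from one program to the next.
```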

If the numbers in these Teacher Preparation Program Reports lead to deeper inquiries into what the data mean, and constructive conversations between the Department of Education and the leadership of the teacher-preparation programs, I’ll be pleased.

But let’s not mistake this for an evaluation. Or sharp focus.

This post also appears on Eye on Education, Aaron Pallas’s Hechinger Report blog.

First Person

Why the phrase ‘with fidelity’ is an affront to good teaching


“With fidelity” are some of the most damaging words in education.

Districts spend a ton of money paying people to pick out massively expensive, packaged curriculums, as if every one of a thousand classrooms needs the exact same things. Then officials say, over and over again, that they must be implemented “with fidelity.” What they mean is that teachers better not do anything that would serve their students’ specific needs.

When that curriculum does nothing to increase student achievement, it is not blamed. The district person who found it and purchased it is never blamed. Nope. They say, “Well, the teachers must not have been implementing it with fidelity.”

It keeps happening because admitting that schools are messy and students are human and teaching is both creative and artistic would also mean you have to trust teachers and let them have some power. Also, there are some really crappy teachers out there, and programs for everyone are often meant to push that worst-case-scenario line a little higher.

And if everyone’s doing just what they’re supposed to, we’ll get such good, clean numbers, and isn’t that worth a few thousand more dollars?

I was talking with a friend recently, a teacher at an urban school on the East Coast. He had been called to task by his principal for splitting his kids into groups to offer differentiated math instruction based on students’ needs. “But,” the principal said, “did the pacing guide say to differentiate? You need to trust the system.”

I understand the desire to find out if a curriculum “works.” But I don’t trust anyone who can say “trust the system” without vomiting. Not when the system is so much worse than anything teachers would put together.

Last year, my old district implemented Reading Plus, an online reading program that forces students to read at a pace determined by their scores. The trainers promised, literally promised us, that there wasn’t a single reading selection anywhere in the program that could be considered offensive to anyone. God knows I never learned anything from a book that made me feel uncomfortable!

Oh, and students were supposed to use this program — forced-pace reading of benign material followed by multiple-choice questions and more forced-pace reading — for 90 minutes a week. We heard a lot about fidelity when the program did almost nothing for students (and, I believe quite strongly, did far worse than encouraging independent reading of high-interest books for 90 minutes a week would have done).

At the end of that year, I was handed copies of next year’s great adventure in fidelity. I’m not in that district any longer, but the whole district was switching over to SpringBoard, another curriculum, in language arts classes. On came the emails about implementing with fidelity and getting everyone on the same page. We were promised flexibility, you know, so long as we also stuck to the pacing guide of the workbook.

I gave it a look, I did, because only idiots turn down potential tools. But man, it seemed custom-built to keep thinking — especially any creative, critical thought from either students or teachers — to a bare minimum.

I just got an email from two students from last year. They said hi, told me they missed creative writing class, and said they hated SpringBoard, the “evil twin of Reading Plus.”

That district ran out of money and had to cut teachers (including me) at the end of the year. But if they hadn’t, I don’t think I would have lasted long if forced to teach from a pacing guide. I’m a good teacher. Good teachers love to be challenged and supported. We take feedback well, but man do we hate mandates for stuff we know isn’t best for the kids in our room.

Because, from inside a classroom full of dynamic, chaotic brilliance;

from a classroom where that kid just shared that thing that broke all of our hearts;

from a classroom where that other kid figured out that idea they’ve been working on for weeks;

from that classroom where that other kid, who doesn’t know enough of the language, hides how hard he works to keep up and still misses things;

and from that classroom where one kid isn’t sure if they trust you yet, and that other kid trusts you too much, too easily, because their bar had been set too low after years of teachers who didn’t care enough;

from inside that classroom, it’s impossible to trust that anyone else has a better idea than I do about what my students need to do for our next 50 minutes.

Tom Rademacher is a teacher living in Minneapolis who was named Minnesota’s Teacher of the Year in 2014. His book, “It Won’t Be Easy: An Exceedingly Honest (and Slightly Unprofessional) Love Letter to Teaching,” was published in April. He can be found on Twitter @mrtomrad and writes on misterrad.tumblr.com, where this post first appeared.

First Person

What I learned about the limits of school choice in New York City from a mother whose child uses a wheelchair


As a researcher interested in the ways online platforms impact learning and educational decision-making, I’ve been trying to understand how New York City parents get the information to make a crucial decision: where to send their children to school.

So for the past six months, I’ve been asking local parents about the data they used to choose among the system’s 1,700 or so schools.

I’ve heard all sorts of stories about the factors parents weigh when picking schools. Beyond the usual considerations like test scores and art programs, they also consider the logistics of commuting from the Bronx to the East Village with two children in tow, whether the school can accommodate parents and children who are still learning English, and how much money the parent-teacher association raises to supplement the school’s budget.

But for some families, the choice process begins and ends with the question: Is the building fully accessible?

The federal Americans with Disabilities Act requires public buildings constructed after 1992 to be fully accessible to people in wheelchairs. However, most New York City public school buildings were constructed prior to that law, and high construction costs have limited the number of new, fully accessible buildings.

As a result, a shocking 83 percent of New York City schools have been found non-compliant with the ADA, according to a two-year federal Department of Justice investigation whose findings the city Department of Education largely disputes. Recently, the city’s Office of Space Management has begun surveying buildings for full accessibility, but more work remains to be done.

One parent’s struggle to find a school suitable for her son, who has a physical disability but no cognitive issues, illustrates what a major role accessibility plays in some families’ decision-making.

Melanie Rivera, a mother of two and a native New Yorker living in Ditmas Park in Brooklyn’s District 22, shared her story with me and gave me permission to share it with others. Here is what she told me, in her own words:

My son Gabriel is seven years old. He was born with a condition called arthrogryposis, which affects the development of his joints. His hips, knees, and feet are affected and he has joint contractures, so his legs don’t bend and straighten the way most people’s do. In order to get around, he uses a combination of crutches and a wheelchair.

Before I had my differently-abled son, I was working in a preschool for children with special needs. The kids I worked with had cognitive developmental disabilities.

Despite my professional experience, I was overwhelmed when it was my turn to help my child with different abilities navigate the public school system. I can only imagine the students falling by the wayside because their parents don’t have that background.

When I was completing my son’s kindergarten application, I couldn’t even consider the academics of the school. My main priority was to tour the schools and assess their level of accessibility.

There are only a couple of ADA-accessible schools in my district, and there was no way of indicating on my son’s kindergarten application that he needed one. When we got the admissions results, he was assigned to his zoned school, which is not accessible.

I entered lengthy and extensive mediation to get him into an ADA-accessible school. At that point, I knew I would just have to take what I could get. For families whose children have special needs, “school choice” can ring hollow.

The process of finding any accessible school was a challenge. The DOE website allows families to search for ADA-accessible schools. But the site describes most schools as “partially accessible,” leaving it up to parents to call each school and say, “What do you mean by this?”

When I called the schools and asked, “Are you a barrier-free school?” the staff in the office didn’t know what the term meant. They might reply, “Oh yeah, we have a ramp.” I’d have to press further: “But can you get to the office? Can you get to every floor in the building?” The response was often, “Oh, I don’t know.”

Even the office staff didn’t know. But for my son’s sake, I needed to know.

Gabriel deserves the full range of academic and social experiences. So every day I make sure he’s learning in the least-restrictive environment — from the classroom, to phys ed, to field trips.

I believe the Department of Education also wants to make schools accessible and to place students with different abilities in settings where they’ll flourish, but the current system is not equipped to follow through on those good intentions. While I see gradual changes, I still know that if I don’t find the best placement for my son, the system definitely won’t.

At the school level, administrators should know the details of their own school’s accessibility. Teachers should learn to include children with different abilities in their classrooms. Such a commitment means recognizing the value of inclusivity — not viewing accessibility as something the ADA says you must do.

Before I had Gabriel, I never thought about accessibility. I never looked at street cutouts or thought about how to enter a store with steps. We’re probably all guilty of perpetuating exclusion at one point or another.

Recognizing that will allow us to change the status quo. It will allow every individual with a physical disability to fully participate in the public school system.

Claire Fontaine is a researcher at Data & Society, a research institute in New York City focused on social, cultural, and ethical issues arising from technological development. Kinjal Dave is a research assistant at Data & Society. You can read more about their project, which seeks to better understand the ways in which diverse New York City parents draw on school performance data, online dashboards, and school review websites when researching schools for their children.