Monday, August 30, 2010

Lake Wobegon vs. Gold stars

From the LA Times story about the most effective teachers in their value-added analysis (here):

"No one is ever really singled out, neither good nor bad," said Pinto. "The culture of the union is: Everyone is the same. You can't single out anyone for doing badly. So as a result, we don't point out the good either."

"When I worked at a bank, I was employee of the month," he added. "For LAUSD, for some reason, it's not a good thing to do."

What do y'all think: is this a fair portrayal of teacher union culture? If so, would you rather live and work in Lake Wobegon, where all the teachers are above average, or in a world where it's possible for a really excellent teacher's work to be pointed out (with the logical corollary that some of the teachers are not as excellent)?

My previous life as a business consultant biases me away from the traditional union philosophy that all workers are interchangeable and should only be distinguished based on seniority -- I've seen it sap any incentive to be creative or original or excel. On the other hand, unions exist for very real and legitimate reasons -- if that business consulting company had unionized, we might have had a livable work-life balance instead of barn-burner hours because the drive (internal and external) to excel was always greater than the drive to sleep.

(Also interesting in that story -- the teachers who get the best standardized test score improvements do NOT see themselves as teaching to the test; it's a byproduct not a goal.)

Sunday, August 29, 2010

Let's give them something to talk about...

(Thanks to my friend who lives in LA and keeps pointing out interesting follow-up articles to me.)

First, the LA Times extended the value-added analysis from teachers to schools, finding that schools serving similar demographics can have quite different value-added results, driven partly by school leadership and a focus on helping individual teachers improve, and partly by curricular and instructional choices. Here.

Then the Times printed opinions on value-added analysis from a variety of education leaders and professionals: Here. I may be cynical, but the responses are exactly what you'd expect. (All responses paraphrased by me.)

  • The USC professor: "this just encourages teaching how to take tests, we need to address poverty instead."
  • The charter boosting philanthropic rep: "we should use the data we have and make it better, and make it public to increase transparency."
  • The first LA Board of Ed member: "we don't want to drive teachers to teach to the test, and we support transparency, but how dare the LA Times go airing our dirty laundry."
  • The CEO of a charter management organization: "student test scores should be one part of measuring teacher effectiveness, and should be used to help teachers improve and (after context-setting) with families and communities; value-added measures work better for elementary than high school where you might take a single subject for only one year."
  • The UCLA professor: "value-added methods have a lot of problems and the LA Times shouldn't go ranking every teacher in LA with a single number next to their name -- it's too complicated for that to be fair or helpful."
  • The second Board member: "we were working on teacher effectiveness, incorporating all the stakeholders, and we need more cooperation from the unions and the legislature to make any changes happen."
  • The educational nonprofit exec director and parent: "The great teachers will be fine and something needs to be done to get rid of the really inferior teachers who have been hiding behind the union; maybe this will shame them into going."
  • The president of the teachers' union: "All the Times has is a hammer, so everything looks like a nail -- we need more tools (better administrator observations, less paperwork, more funding), a better definition of student learning and metrics of it, and more guidance for teachers on how to improve their practice."
  • CA Secretary of Education: "Bravo to the Times for forcing this conversation on what metrics to use to measure effective teaching; the union should stop complaining and start working on a better alternative."
  • Parent representative: "standardized tests are not everything, but this data should be public unless it's going to be used by the district for private personnel processes."
  • Third Board member: "We've been making a lot of progress on getting rid of bad teachers in the past two years, really!"

What I'm still missing is the dialogue -- not just, "this is what I believe" but "I hear what you believe and this is why I disagree." I'd love to get all these people in a room (and also the key LA Times reporters and editor) and see if a constructive conversation could be fostered. Wouldn't it be great to find out what they might come up with?

Thursday, August 19, 2010

Fortune favors the bold (Or, the best defense is a good offense)

So, Randi Weingarten (president of the American Federation of Teachers) thinks parents should have access to individual teachers' performance evaluations, and maybe those performance evaluations should take into account value-added test score data, but should not consist entirely of such data. Therefore the LA Times is being a big, scary, sledgehammer-wielding meanie by threatening to publish individual teachers' "primitive" and "rudimentary" value-added scores.

I actually think she's mostly right -- it would be better to (a) give parents transparency on teachers' performance evaluations and (b) have those performance evaluations include both standardized test score impacts and other metrics such as classroom observations (ideally unannounced) and student portfolio assessments. Ideally, this would happen through a controlled, well-understood process whereby the district calculated each teacher's value-added score and discussed it with each teacher, so they knew what it meant and how to use it to improve.

I also still think that Ms. Weingarten, the district, and the teachers would be in a much better position if they had proposed this first rather than offering it as a reactive defense to the Times. There are interesting questions in here for any administrator, teacher, parent, or local journalist:

  1. What are your local public school districts currently doing with their standardized test score data? What more could they be doing to really understand your students' learning and your teachers' effectiveness?
  2. What level of transparency does your local district currently offer on teacher performance? Is that sufficient for you and your community?

If the answer to the first question is "not nearly as much as they could be" and the answer to the second question is "not much", then it seems like there is a big, gaping opportunity in your local community, and whoever takes the initiative to come up with a proposed answer first is going to have the most leverage in controlling that debate going forward. The lesson here is that districts and teachers' unions can't just bury their heads in the sand and hope standardized test scores will go away because they don't like them -- fortune favors the bold. Figure out how to use the data in a way that makes sense, then broadcast your answer to the heavens. If the LA Unified School District and teachers' union had taken the initiative here, the LA Times would be using a teacher-approved, school-district-controlled approach to evaluating teacher effectiveness to write their stories.

What story do you want your local paper to publish?

Wednesday, August 18, 2010

Value added

This is a fascinating article -- I recommend reading it. Go ahead, I'll wait.

To sum up, the LA Times took data from the Los Angeles Unified School District (LAUSD) on year-to-year test score changes for each student and hired an outside consultant to calculate teacher effectiveness by the "value add" of each teacher, averaged over all their students, over several years.

How this works: if Jane Doe was a 60th-percentile student last year and after being in your class for a year she tested at the 65th percentile, you're given credit for helping her raise her scores; if she drops to the 55th percentile, your effectiveness is questioned. But of course, other things could have been going on in Jane's world that year; you weren't the only factor. Therefore, the change in relative scores of individual students is averaged over a whole class (or several classes) of students, over several years, and the resulting trends are probably a valid reflection of your academic effectiveness as a teacher relative to your peers.
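To make the averaging concrete, here is a toy sketch of the calculation described above. It is not the Times' actual methodology (their consultant used a more sophisticated statistical model); the teacher names and numbers are invented.

    # Toy illustration of a value-added calculation (not the LA Times' actual model).
    # Each record is one student-year: the student's percentile before the year
    # with this teacher, and their percentile after it.
    from statistics import mean

    def value_added(student_years):
        """Average change in percentile rank across a teacher's student-years."""
        return mean(after - before for before, after in student_years)

    # Hypothetical data, pooled over several classes and several years.
    ms_smith = [(60, 65), (42, 47), (88, 85), (30, 41), (55, 56)]
    mr_jones = [(61, 58), (45, 40), (70, 72), (35, 30), (52, 50)]

    print(value_added(ms_smith))   # 3.8: her students gained ground on average
    print(value_added(mr_jones))   # -2.6: his students lost ground on average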

The Times found several interesting things:

  1. Fairly large variations in the effectiveness of teachers within each school, much larger than the variation between schools.
  2. Very little correlation between teachers' experience, education, or training and their effectiveness.
  3. Very little correlation between students' race, wealth, English proficiency, or previous performance and teacher effectiveness.
  4. A disconnect, for some teachers, between perceived effectiveness (based on dedication and professional accomplishments) and demonstrated academic ineffectiveness in the data analysis.

In some ways the most interesting aspect of all this is that the LAUSD had the data to do this analysis but, despite urging by their internal experts, chose not to apply the data this way. This has left them open to a story like this LA Times piece, which is now the loudest voice on how this data can and should be used -- the teachers' union and the school district have lost control of the dialogue by holding back. This has also led to situations where individual principals are trying to estimate the results using back-of-the-envelope calculations. Those principals have found that addressing the low-effectiveness teachers is challenging -- coaching helps, sometimes. Figuring out what to do with this data is going to be challenging, but ignoring it isn't going to fix the problems.

For the record, this is a measure of teaching effectiveness I, as a teacher, think I can live with. It focuses on my individual impact without penalizing me for whatever experience my students had in prior years.

The article ends by bemoaning the lack of transparency on individual teacher effectiveness for parents. I do find it strange that in a world where I can get 15 reviews of a vet, a restaurant, or a dentist before choosing to use their services, this kind of data isn't available to parents. What do you think?

(I found this article from a post on Eduwonk, which has a generally similar take to mine, but thinks the story should have left the teachers anonymous and questions the finding that high-poverty schools do not have a higher concentration of low-effectiveness teachers.)

Monday, August 16, 2010

Amazing Schools

Via Practical Theory, whose author is principal of the Science Leadership Academy (one of the schools profiled), the Ladies' Home Journal put together profiles of ten pretty inspiring schools.

I think the diversity of the models here is fantastic. In no particular order:

  • An inclusion-focused elementary school in Boston
  • A recycling and conservation-focused high school in California
  • A KIPP elementary school in Houston
  • A Brooklyn high school turning around urban education with small classes and career seminars
  • A marine-focused high school in Florida
  • An expeditionary learning K-8 school in Denver
  • A grades 6-12 boarding school in DC
  • A grades 6-12 self-directed learning environment in Minnesota
  • A high tech, inquiry-based science-focused high school in Philadelphia
  • A public Montessori junior & senior high (grades 7-12) in Cincinnati

It would probably be an amazingly creative and challenging opportunity to teach at any of those schools. However, none of them is in Michigan. So, two questions -- what are the Michigan schools that are experimenting with those kinds of learning models? And if we were going to start a school, which of those would be the most exciting models to emulate?

Thursday, August 12, 2010

Singapore Math

I've come across the term Singapore Math in a few different contexts in the last few days:

Context one: a homeschooling mom describing her seven-and-a-half-year-old's planned curriculum for next year

Context two: a friend of mine who's starting to homeschool her 4.5-year-old son also mentioned it (independently of Context 1)

Context three: a KIPP math teacher in DC walks through a Singapore Math-based model approach she learned at their national conference

Context four: the same math teacher posted on where Singapore math sits in the traditional vs. reform spectrum (with bonus video of Tom Lehrer's New Math, one of my favorite geeky songs ever)

All of this finally led me to look up what exactly this Singapore Math stuff is. Here, let me wikipedia it for you. The basic impression I got from the model drawing example posted on Math Rules was that it was visual algebra. The description on wikipedia is more enticing... the part that specifically grabbed my attention and made me post was this: "The principle of teaching mathematical concepts from concrete through pictorial to abstract. For example, introduction of abstract decimal fractions (in Grade 4) is preceded by their pictorial model of centimeters and millimeters on a metric ruler, but even earlier (in Grades 2 and 3) addition and subtraction of decimals is studied in the concrete form of dollars and cents."

This actually sounds to me like it could potentially be a really cool merger of the kind of concrete arithmetic that Lockhart's Lament would ask you to start with, leading into the kinds of applied problems that dy/dan is great at coming up with, all the while concentrating on the "why" beyond the "how".

What do you think? Have you had any experience teaching with Singapore Math? Is it the "New new math"?


Tuesday, August 10, 2010

Replacing school with the Khan Academy?

Have y'all heard of the Khan Academy? It seems to be one man, Salman Khan, trying to provide a basic math, science, and finance education to the world, for free, via video. (I learned about it from the video in this post by dy/dan, which I recommend.)

What I actually think is most interesting about it, from a pedagogical perspective, is how little Khan does with the technology. It seems like the lessons mostly consist of him talking on audio and writing on a virtual chalkboard (with multiple colors of chalk!) in the video. From a casual survey of the videos, he occasionally pulls in a screenshot of a website, but given all the fascinating things you could do with pictures, video, and interactive content, that seems pretty limited. It is, in essence, a basic chalk-talk put on the internet. And the curriculum seems largely based on your average standard textbooks in each field. Heck, the video I've linked to below has him solving a textbook word problem from an Algebra I book. Khan is apparently experimenting with an open-source, web-based "Exercise Application", used through a Google login, that seems to basically be an automated workbook.
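For what it's worth, here's a minimal sketch of what I mean by "an automated workbook": generate a practice problem, check the answer, and keep going until the student gets a few in a row right. This is purely my own guess at the shape of such a thing, not how Khan's Exercise Application actually works.

    # Hypothetical automated-workbook drill: one-step linear equations.
    # My own sketch, not Khan's actual Exercise Application.
    import random

    def make_problem():
        """Return a prompt like 'Solve 3x + 4 = 19' and its answer."""
        a, x, b = random.randint(2, 9), random.randint(1, 12), random.randint(1, 20)
        return f"Solve {a}x + {b} = {a * x + b}", x

    def drill(required_streak=3):
        streak = 0
        while streak < required_streak:
            prompt, answer = make_problem()
            response = int(input(prompt + "   x = "))
            streak = streak + 1 if response == answer else 0   # a miss resets the streak
        print("Mastered!")

    if __name__ == "__main__":
        drill()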

This raises so many questions: Is this effective? Could it be useful in a classroom? ("Here, Jimmy, I see you didn't get Solving Systems of Equations by Graphing when I explained it -- maybe this video will help? And then do these practice problems online.") Does it replace a classroom? (Does it depend on the classroom being replaced?) Is it an acceptable substitute when classrooms aren't available? (My physics students in Ghana didn't have a chemistry teacher -- would these videos have been better than just having the textbook?)

I have to admit that I'm having a knee-jerk reaction of, "of course you need a live teacher to explain in a way that accommodates the particular student sitting in front of you and their prior knowledge and speed of learning!", but I'm trying to acknowledge my own bias as a teacher-in-training and be open-minded here. I think at the end of the day, because of the approach and the non-interactivity of the videos, they're basically another text -- another way of getting at the material that's presented in the textbook. Can you learn from them? Yes, just as you could learn from reading a textbook. Do they have advantages over a textbook? Yes, in that they're free to replicate and deliver (as long as you have a working internet connection), and they may work better for those who learn more easily from watching and listening than from reading. But they also have all the limitations of a static textbook -- they're delivered to a single, generic audience and not customizable based on prior knowledge, areas of interest, speed of learning and need for elaboration or multiple explanations, context of the learner, or the community in which the learner is learning.

There's another interesting question here, which is: does Mr. Khan need to be certified or authorized to teach this material? Is he qualified in terms of content knowledge and pedagogical knowledge? The website doesn't list any of his qualifications. One of the linked video profiles of him and the project mentions that he used to work for a hedge fund before launching this project. In some ways, he's the uber-Teach for America kid, reaching out to the underserved of the (English-speaking) world rather than just the underserved of the US, figuring that the content he's learned along the way, plus the curriculum inherent in textbooks, qualifies him to do this. (Wait a minute -- he is me, when I taught physics in Africa with the Peace Corps!) Nothing I saw in the few videos I watched seemed wrong, but there also doesn't seem to be any quality control on the content. Does that matter? Is he filling a gap that should have been filled by the formal educational community? Yes, MIT has its OpenCourseWare, and I think other universities are also starting to jump on that bandwagon, but where's the K-12 version?

Sunday, August 8, 2010

Replacing school with World of Warcraft?

So, James Gee says school should be more like games.

Bruce Everiss says games are going to replace school (or at least, classrooms and teachers as we know them now), using systems such as smart.fm, which personalize the rate of learning and review of information for each student.
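I don't know what smart.fm actually does under the hood, but systems like this typically rely on some flavor of spaced repetition: items you answer correctly come back at longer and longer intervals, and items you miss come back soon. A generic Leitner-box style sketch of that idea (the intervals and the card are my own invention):

    # Generic spaced-repetition scheduling (Leitner boxes) -- not smart.fm's actual algorithm.
    from datetime import date, timedelta

    # Review intervals in days for boxes 0..4: misses come back tomorrow,
    # well-known items stretch out to a month.
    INTERVALS = [1, 3, 7, 14, 30]

    def schedule(card, correct, today=None):
        """Move a card up or down the boxes and set its next review date."""
        today = today or date.today()
        card["box"] = min(card["box"] + 1, len(INTERVALS) - 1) if correct else 0
        card["due"] = today + timedelta(days=INTERVALS[card["box"]])
        return card

    card = {"front": "capital of Ghana?", "back": "Accra", "box": 0, "due": date.today()}
    print(schedule(card, correct=True))   # moves to box 1, due again in 3 days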

Personally, I think smart.fm looks interesting for learning factual knowledge at the remember/recall level... not sure about any of the higher orders of the taxonomy. Bruce thinks teachers will evolve into mentors as students move into the real world and try to apply their computer-taught knowledge.

What do you think?

Friday, August 6, 2010

Standardized tests, school structures, and funding

Warning: thinking below is messy and evolving. But writing is helping me work my way through it, and any comments you leave can also be part of that process!

From the introduction to Teach Like A Champion by Doug Lemov (bolding mine):

So let us assume that students need to have both kinds of skills. They need to be able to read and discuss Shakespeare, but they also need to be able to read a passage they've never seen before and effectively make sense of its meaning, structure, and craft. They need to be able to write a short paragraph giving evidence to support a conclusion. They need to be able to solve for x. Most state tests do an effective job of measuring these skills, and while students who can demonstrate them are not yet fully prepared for college, there are no students who are prepared for college who cannot demonstrate them.

It's also worth noting that teachers who are better at teaching the skills measured on state tests are most often also the teachers who are effective at teaching higher order skills. I know this because within Uncommon Schools, when we correlate the success of our students on tougher internal assessments (essay writing assessments that are far more demanding than state tests, for example), there is a strong correlation between both the teachers and students whose results show the most growth and achievement on the two types. Furthermore, our teachers who achieve the strongest results from state assessments also have the strongest results in ensuring our students' entry into and success in college. In short, student success as measured by state assessments is predictive of their success not just in getting into college but of their succeeding there.

Finally, the correlation between success on even more straightforward assessments (nationally normed test scores) and ultimate academic success should be instructive to us. I often meet educators who take it as an article of faith that basic skills work in tension with higher-order thinking. That is, when you teach students to, say, memorize their multiplication tables, you are not only failing to foster more abstract and deeper knowledge but are interfering with it. This is illogical and, interestingly, one of the tenets of American education not shared by most of the educational systems of Asia, especially those that are the highest-performing public school systems in the world. Those nations are more likely to see that foundational skills like memorizing multiplication tables enable higher-order thinking and deeper insight because they free students from having to use up their cognitive processing capacity in more basic calculations. To have the insight to observe that a more abstract principle is at work in a problem or that there is another way to solve it, you cannot be concentrating on the computation. That part has to happen with automaticity so that as much of your processing capacity as possible can remain free to reflect on what you're doing. The more proficient you are at "lower-order" skills, the more proficient you can become at higher order skills.

So, to sum up, Lemov proposes that state standardized test scores

  1. measure knowledge and skills that are necessary but not sufficient for ongoing academic success,
  2. are predictive of success with higher order skills and of success in higher education, and
  3. measure foundational skills that need to be mastered in order to enable higher order thinking and skills.

This makes intuitive sense to me, although I would of course love to see the data supporting it.

I find this excerpt interesting because it articulates why I feel frustrated every time we start discussing in class whether using standardized tests to measure the effectiveness of schools is valid. On the one hand, no, standardized tests clearly cannot measure all the higher order thinking skills that we wish to develop in our students. On the other hand, if a school's students cannot pass the standardized tests, there is a problem. So while having 100% of your students score proficient on a standardized test doesn't necessarily mean that a school is doing a great job, having only 20% or 50% of your students score proficient on a standardized test does, in my mind, mean that the students are not acquiring the foundational competencies they will need in life, and are unlikely to be developing amazing higher order thinking skills while failing to be able to pass the standardized test.*

The question then becomes, can you blame the school for this? I think there's evidence that the traditional public school structure (180 days a year, 7 hours/day) often does an adequate job of educating middle and upper-middle class kids who mostly have a reasonably stable home life and families with enough money to not be worried about food, clothing, housing, and basic security (e.g. my own public school education in Bloomfield Hills). I think there's also solid evidence that that same structure is often inadequate in the face of the poverty and social disadvantages associated with the population in many urban school districts (e.g. Detroit).

So what happens when you have an inadequate structure and you impose high-stakes standardized testing upon it? One of two things: you change the structure to something that works better, or you try to game the testing system. Charter schools like KIPP and Uncommon Schools change the structure -- longer school days, longer school years, highly disciplined approaches to teaching and learning -- and see good test results, which the author above ties to improved higher order thinking. There are of course other, more progressive approaches to changing the structure, such as adopting place-based curricula or addressing health issues and incorporating the broader community like the Harlem Children's Zone. However, the schools that are stuck with rigid curricula and inadequate instructional time and resources for their students to overcome the disadvantages the kids bring to school have no recourse but to try to game the system. This results in the much-decried teaching to the test, or focusing all extra resources on the "bubble" kids who will make the difference between meeting the adequate yearly progress targets and not, or sometimes even cheating by teachers and administrators.

So if the end goal is to make the public education system serve all students at a level that at least lets them meet the minimum standards enshrined in standardized testing (and based on the arguments Lemov makes, I actually think that's a reasonable goal), the key question becomes how can we change the structure of the schools that aren't currently achieving that minimum standard. One approach is the extended school day and year and highly intensive teacher development and involvement embodied in the most successful charters. This is the approach the Obama administration seems to have latched onto, but it seems to have significant barriers to broad implementation in a public education system dominated by unions that would demand to have their teachers compensated for the extra teaching time in an economic climate where raising funding levels would be very difficult.

There are clearly other innovative approaches out there -- inquiry-based learning, place-based schools, and the Harlem Children's Zone. However, the advocates of those models don't seem to be making the case for scaling up and rolling out those models in other contexts in a language that resonates with those awarding large amounts of private or public funding. I'm going to speculate that part of the barrier here is that the advocates of these models tend to be of a more progressive mindset, and tend to dismiss standardized tests. If instead those advocates showed that their approach of teaching beyond the tests also prepared students to meet the "table stakes" of proficiency on standardized tests, they would have a much clearer way to "demonstrate success" and attract both the private funding that has been propelling organizations such as KIPP and the public funding currently being awarded through Race to the Top and the Investing in Innovation (i3) funds.

What do you think?

~~~~~~~~~~~~~~~~~

* A 90% proficient rate might be okay, on the theory that 10% of your students might have test-specific disabilities that make it difficult for them to demonstrate their knowledge and skills in the standardized testing environment.

Wednesday, August 4, 2010

Muddling through the ideologies

The more I read, the more confused I get.

It seems like there are (at least) two schools of thought out there in the educational universe:

School of Thought A (Let's call it "conservative", although there's pretty clear evidence that it's being implemented as much by the Obama administration as it was by the previous administration): "Based on the metrics we have available (such as the National Assessment of Educational Progress), the quality of public education outcomes in the U.S. has declined over the years. Our schools are not serving our students well, either compared to prior years or compared to other countries. This seems to be correlated with a shift away from traditional, canonical curricula to mushy, ill-defined, 'child-centered' pedagogies. We should fix this, through strong state and national standards, and the way we will know whether we have fixed it is through demonstrated improvement in those same metrics (standardized test scores)."

School of Thought B (Let's call it "progressive"): "People with conservative agendas have used unsuitable instruments (standardized tests) to measure educational outcomes and manipulated the data to make it seem as if public schools are failing, when actually they are doing just as well as they ever were. The real problem is that our schools are inequitably funded and resourced. We should fix this, through better funding of schools for all children, and we should implement progressive, child-centered learning strategies rather than national standards. We should measure outcomes through individual portfolios and narratives of student progress. Standardized tests drive schools to teach how to take tests rather than how to learn, and should be eliminated."

As someone fairly new to these debates, I have the following questions:

  1. Is the above a fair summary of the two schools of thought in question? (Acknowledging that this applies some fairly broad brush strokes and probably neglects many fine points of debate and dissension within the ranks.)
  2. What do the national and international standardized test trends really say? How can both sides claim with confidence that the NAEP data supports their claims when properly analyzed? Is there data that supports a decline in the quality of U.S. public schools, either relative to their own past performance or relative to other nations? Does this data hold up when disaggregated by socio-economic status, parental education levels, English-language-learner status, special education status, and other factors external to the school but with potentially strong influence on student outcomes?
  3. Within the conservative school of thought, I have not come across much debate on whether standardized tests measure the educational outcomes that really matter, and what effect their imposition has on the outcomes being measured. Which is to say, do we acknowledge that this is driving "teaching to the test", and if so, do we think it's a good thing? If not, what should we do about it? Does this discussion exist? If so, where?
  4. Within the progressive school of thought, I have not come across much debate on what metrics students, parents, communities, and society at large should use to measure schooling outcomes in a way that is comparable across schools, districts, and states. Which is to say, if not through standardized tests, then how should we identify the bright beacons of hope and the areas that need improvement, and use those to continuously improve our education system? Does this discussion exist? If so, where?

I would love to be pointed to reading material that addresses these questions. Specifically, I would love to read a non-partisan take on #1 and #2 (if such a thing exists). And I would love to read an article by a leading conservative scholar addressing #3 and a leading progressive scholar addressing #4.

Any recommendations?

Assessment without grades


Continuing my obsession with assessment, today I've found myself entangled in a bunch of posts about whether grades are useful, and if not, what they should be replaced with.

Jason Bedell makes the excellent point that a traditional letter grade or percentage of points aggregated from various assignments conflates several factors:

  • Mastery of the material (Can I meet X learning standard?)
  • Timing of the learning (Could I meet it for the first homework assignment? The quiz? The final test? The week after the final test?)
  • Student organization and motivation (Did I show up for class / turn in my assignments / review for the test?)

(Or if you're the pointy-haired boss, two of the three may be sufficient.)

Because it's traditional, the A/B/C system is fairly well understood by everyone, and I would even argue that future colleges and employers are comfortable with the conflation of learning / timing / organization, because speed, mastery, and organization are also all important for success in those contexts.

However, there's a fairly good point to be made that in terms of fostering real learning on the part of all our students, and making conversations and partnerships with parents easier, untangling those three elements is useful. Jason does it by using standards-based grading, where each student is assigned a score for each learning standard (e.g. "Define and classify special types of quadrilaterals") between 1 ("attempts the problem") and 4 ("demonstrates thorough understanding"). He doesn't think students should be penalized for learning more slowly, so there aren't standards around how fast you master the material. If organization is important, then it would get its own standard or set of standards, rather than being conflated with mastery of the material.
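To make that concrete for myself, here is a rough sketch of what such a gradebook might look like if only the most recent score on each standard counts (so slower learners aren't penalized). This is just my reading of the idea, not Jason's actual system, and the standards and scores are made up.

    # Sketch of a standards-based gradebook: each standard is scored 1-4, and only
    # the most recent score counts, so late mastery isn't penalized.
    # Invented data, not Jason Bedell's actual gradebook.

    gradebook = {
        "Define and classify special types of quadrilaterals": [2, 3, 4],  # mastered late
        "Solve systems of linear equations by graphing": [3, 3],
        "Organization: assignments turned in on time": [4, 2, 3],  # its own standard
    }

    def report(gradebook):
        for standard, scores in gradebook.items():
            print(f"{scores[-1]}/4  {standard}")   # latest attempt is what counts

    report(gradebook)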

I'm not exactly sure how this would look in a classroom I was in charge of, but I'm happy to have the food-for-thought. (More thinking on the topic of abolishing grading here, which I'm hoping to eventually work my way through.)

On a related note, Dan Pink says what matters for job performance and satisfaction (once you're paying people enough that they don't have to worry about money) is:

  • Autonomy
  • Mastery
  • Purpose

If you release students from your timetable (autonomy), focus on mastery of standards in your grading scheme, and help students find the relevance of the material to their lives (purpose), does that satisfy all the elements?

Also, what does that look like for teachers? In public schools, NCLB has certainly encouraged a move away from autonomy towards standardized curricula, and a focusing of purpose on test scores. Not super-hopeful for future job satisfaction in that context.

Tuesday, August 3, 2010

If I were teaching economics...

I would use this video as part of my supply & demand unit:

http://www.npr.org/blogs/money/2010/06/26/128134222/haiti-rice

Getting readers and viewers interested in the developing world - an educational analogy

In my internet wanderings today, I've come across two interesting pieces in the Guardian: Germaine Greer reviewing the book Half the Sky, and Jonathan Freedland discussing media portrayals of the developing world.

Both pieces hinge on the essential dilemma of how to get the media-consuming public in the developed world aware of and engaged in the issues of the developing world. They discuss how this is often accomplished by resorting to oversimplified caricatures of victimized or heroic developing-world characters, losing the complexity and color of life in the country being focused on. Often the reader or viewer is engaged through the use of heroic avatars -- the Americans or Europeans who have swooped in to save the hapless locals -- at the expense of portraying the ability of the locals to save themselves. And often the complexities (and complicities) of the interactions between the developed and developing worlds are neglected.

Life in the developing world is abstract, complicated, and messy for most media consumers in the developed world, and the media therefore has a hard time engaging its audience in a well-rounded, complex, authentic view of that world.

There's an educational analogy here. Africa : US TV viewers :: physics : secondary school students.

Greer mostly points out problematic aspects of Half the Sky's approach to engaging its readers, without proposing alternatives. Freedland, however, proposes three ways for the media to engage consumers in a more authentic view of the developing world.

  1. Find the active drama going on within the story, "not a crude battle of victims against villains, but [the] subtle mix of conflicting, shifting political interests."
  2. Replace occasional, flashy, "parachute-in", front-page stories with a sustained stream of small, "inside-page" stories that over time piece together a well-rounded mosaic of the complexity of the subject.
  3. Insist that "these foreign stories are not so foreign" -- unearth the links between the reader's world and the world of the story. (His example ties the lithium used to power laptops and iPhones to the environmental problems around Chile's largest lithium mine.)

So, to carry forward the analogy, what does that look like for Physics (or almost any subject):

  1. Find the active dramas in the subject -- don't just talk about gravity as a given, talk about how our understanding of gravity developed over time; and don't just teach science as a set of facts in the textbook, introduce current scientific debates (dark matter, the expansion and eventual fate of the universe, etc.).
  2. Don't just cover a topic once and consider it taught. Come back to important topics repeatedly over time, touching on them in relation to new topics and new applications. Cover potential energy with relation to the physics of motion, and come back to it in the context of electrical circuits and the structure of the atom.
  3. Make the connection between the subject being taught and students' daily lives. Voltage isn't just a symbol in a circuit diagram -- it also determines what kind of battery a student needs for a given electronic device. And distance = velocity * time isn't just an equation; it tells you how long it will take you to get to school each day (quick worked example below).
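A quick worked example with made-up numbers, since that's exactly the calculation I'd want students doing for themselves:

    # Made-up numbers: a 6-mile trip to school at 10 miles per hour.
    distance = 6.0      # miles
    velocity = 10.0     # miles per hour
    time_hours = distance / velocity   # from distance = velocity * time, t = d / v
    print(time_hours * 60)             # 36.0 minutes -- leave earlier or pedal faster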

In the end, I actually feel pretty lucky. I think physics, earth science, and math are all much easier to teach than the politics, religion, social issues, and history of Africa. And I get a whole school year of classes to work with, rather than limited TV time or print space. I just need to get my students to feel the same way.

Monday, August 2, 2010

How not to do it

I'm 75% done with the Bloodborne Pathogen Training we're required to take before going into the classroom. Things that are educationally wrong with this training:

  1. It's completely unengaging -- they're discussing life-and-death stuff like how not to get HIV, and they've managed to make it boring
  2. By making it a narrated slide presentation where everything the narrator says is illustrated on the slide
  3. With lots of bullets.
  4. It's non-interactive -- so far I'm on slide 34 of 39 and at no point have I been required to take any mini-quizzes on whether I've absorbed any of the key points. All I have to do is listen to the narrator drone and hit "next" at the right point.
  5. It's unmotivating -- it exudes, "you're watching this slide show because it's the cheapest and easiest way for your employer to fulfill some bureaucratic mandate".

I'm 15 minutes into a stupid 20 minute presentation, and I'm so bored I'm venting about it here because going back to it sounds so painful.

What would a good blood-borne pathogen training look like? In a live setting there would be some storytelling of experiences people have had with unsafe sharps or blood pouring out of an HIV-positive student's nose. And then the class would propose from prior knowledge all the ways you can get infected, with the teacher filling in any blanks / correcting any misconceptions. And then when we had to be told that our employer will have an "exposure control plan", we'd actually get copies of one and have to find relevant information in it for various emergency scenarios. And maybe at the end of it I would have learned something other than, "online slideshow trainings are mind-numbing."

If it's important enough to have us do a training on this, why isn't it important enough to do it well? And how on earth can a school of education inflict such a horrible example of pedagogy on its students?

Update: The last three slides did have true or false questions that could be answered by most untrained chimpanzees. Hurrah interactivity! Although none were as good as the radioactive materials handling training at Fermilab, where one of the questions was (I wish I was joking): "You can safely eat a radioactive source. (T/F)"

P.S. I wonder if this is where all the rhetoric at BP about "Safety First" goes bad and turns into the world's biggest oil spill ever. Safety is SO IMPORTANT that we must inflict hugely boring (but consistent!) training on our employees that treats them like chimpanzees and reinforces the idea that safety is about checking boxes, not about applying your actual brain and spidey senses to averting disaster.

Sunday, August 1, 2010

EDUC 504: Blog accounting

Just to make this as clear and easy as possible for Kristin and Jeff when they start grading the ED504 blog assignment:

Blog posts responding to each week's readings and post-reflection on class are pretty clearly labelled in their titles, but for ease of navigation:

June 30: Reflection

July 9: Readings, Reflection

July 16: Readings, Reflection

July 23: Readings, Reflection

July 30: Readings, Reflection

(Yes, there are multiple other posts on this blog, and there will continue to be, I hope... it's a multitasking blog -- part blogging assignment for EDUC 504 - Teaching with Technology, and part just Emily Blogging Ed School.)

Week of July 16

Edublogger posts read:

Comments left: I commented on Chris Sessum's post discussed in class: Who's Cheating Whom?

Week of July 23

Edublogger posts read:

Comments left: I commented on the Yellow Lights post -- I thought it was potentially a good example of place-based education, which is one of the reform topics in 649

Week of July 30

Edublogger posts read:

Comments left: I commented on the Tower of Triumph post (currently awaiting moderation -- I was happy to see an example of extrinsic motivation (stickers) transitioning to intrinsic motivation with a challenging student)

Response post: My response to the Learning is Messy post is here.