
What’s in a Stat? Using Data to Impact Small Choices

1 Aug
Salutations chart, from blog.okcupid.com

In a blog post titled “We Experiment on Human Beings,” OKCupid co-founder Christian Rudder shares how the site manipulated user profiles to gather data on what led to interactions and meaningful conversations on the site.

His post is in reaction to the ruckus over the Facebook emotional contagion study, but I found it much more interesting how OKCupid uses its data.  Yes, they mine their data for information that will help them make the site more successful (and, by extension, profitable), but they also share their findings on their blog.  OkTrends is a veritable gold mine of information about our habits, our preferences, and our often misguided assumptions about what will appeal to a potential mate (or to ourselves).

What’s awesome about their data is that they interpret it for us–so when they share thousands of data points about which first messages gained the most traction, in their post Exactly What to Say in a First Message, they get very specific with advice: open with “how’s it goin” or “what’s up,” but not the more formal “hi” or “hello.”  Express interest by using the phrases “I was curious” or “You mentioned…”  Contrast that with the general advice we often get, like “be casual” or “show specific interest in the other person.”

Imagine this transferred to other fields.  Let’s take the classroom.  We give feedback to students, but it’s often a general, not-easily-applied kind of message.  For example, “Johnny is not very engaged in his reading.  He needs to focus on his books for longer.”  We have a very general piece of data here–not engaged in reading–that’s sort of equivalent to OKCupid telling users, “you’re not successful at getting dates.”  It’s accurate, but it’s describing a problem, rather than being helpful.  In fact, it’s pretty discouraging to hear.

We could get more specific with our data.  “Johnny reads for an average of 5 minutes before he finds an alternate activity, like going to the drinking fountain or sharpening a pencil.”  But we’ve really just described the problem in more detail, like saying, “people look at your profile on OKCupid for an average of 8 seconds before they click away.”

We need some data for when Johnny is successful at engaging in reading to see the difference–or barring that, some data about when other students similar to Johnny are successful.  “Johnny reads for an average of 15 minutes when his book is a series with characters he knows well,” or “Johnny focuses for more than ten minutes at a time when he’s sitting in a favorite spot, facing away from other students so he’s not distracted.”  Suddenly we have some strategies for how to help Johnny, like OKCupid telling us that pictures that show activities spark more meaningful conversations on average than selfies that just focus on a smiling face.
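To make that shift from describing the problem to spotting what works concrete, here is a minimal sketch (in Python, using entirely made-up observation logs) of how a teacher might average reading time by condition; every field name and number below is a hypothetical illustration, not data from the post or from OKCupid.

```python
# Hypothetical sketch: group logged reading sessions by condition and
# compare average focus time, the way the Johnny example moves from
# "not engaged" to "focuses longer with series books in a favorite spot."
from collections import defaultdict
from statistics import mean

# Each record is one observed reading session; fields and numbers are invented.
sessions = [
    {"condition": "series book, favorite spot", "minutes": 15},
    {"condition": "series book, favorite spot", "minutes": 14},
    {"condition": "unfamiliar book, at desk", "minutes": 5},
    {"condition": "unfamiliar book, at desk", "minutes": 4},
]

minutes_by_condition = defaultdict(list)
for session in sessions:
    minutes_by_condition[session["condition"]].append(session["minutes"])

for condition, minutes in minutes_by_condition.items():
    print(f"{condition}: {mean(minutes):.1f} minutes of focus on average")
```

The point isn’t the code; it’s that once observations are logged with a condition attached, the useful comparison falls out almost automatically.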

Now imagine we give this information to Johnny, instead of just sharing with parents at conferences or keeping the knowledge tucked in our head.

“Johnny, I’ve been marking when you’re reading and when you’re doing a different activity, and I noticed something interesting.  Usually when you’re reading, you read for about 5 minutes before you get distracted.  But sometimes when you read, you can focus for ten or fifteen minutes at a time!  Usually that’s when you’re reading your series books, like Animorphs or The Lightning Thief.  What do you think of that?”

Johnny can make the cognitive leap.  And now he can devise some plans for how to stay engaged in his reading for longer.

Data is a powerful tool for noticing trends, for seeing what works and what doesn’t, but it’s often held only by those in charge.  OKCupid has opened up some of their data to benefit their subscribers, and they’ve made that data specific, comprehensible, and useful.  Too often in education the data is vague (such as “below standard in math” or “5 on the API”) or not shared with the ultimate actors–the students.  If we are really specific about the issue (a student struggles in reading because he doesn’t notice when a vocabulary word is unknown), then we can be very specific about solutions (repeated lessons with short texts, working on identifying and attacking unknown words).

Data can help us to identify problems, but it can also help us to identify solutions.  We can share data with our students, in the form of grades, percentages, or smiley-faces, but the more specific we are with our observations, the more our students can respond with a positive solution to the problem.

 

Measurements Gone Wild–How Assessment Can Send You Down the Path of Darkness

30 Apr
Math Fact Test, by Judy Baxter, from Flickr

Pardon me for the dramatic title, but I don’t think it’s overstated.  In America, we love data.  We love to track what we eat, count how many people visit our website, and test our kids endlessly.  We make important decisions from these numbers.  The scale is up?  Cut back on carbs for a while.  Did you see a spike in traffic?  Iterate on that last post.  Kids are doing poorly in math?  Time to shorten recess and add in weekly times-tests.

What happens the next week/month/year?  Too often it’s…nothing.  The scale is still up, the website hasn’t taken off, our kids are still stagnating in the bottom half of the world in math achievement.  Our response?  Cut out more carbs, eliminate recess altogether, etc., etc.

If the definition of insanity is doing the same thing over and over, expecting a different result, America is one crazy country.  We’ve been railing against the achievement gap and low reading proficiency rates for decades, and our response has been to do more of the same.  No Child Left Behind brought in a surge of standardized testing and accountability, some of which was enlightening.  Disaggregating data by income level, English level, and racial group highlighted the disparity in achievement, especially in schools that were considered “high performing” and then were revealed to be failing their neediest children.  Data and accountability  are important for any program.

But are we looking at the right data?  Eduardo Briceno, the co-founder of Mindset Works, began a recent post on assessment with this quote: “Not everything that counts can be counted, and not everything that can be counted counts.”  That quote encapsulates the current data crisis neatly–we are counting what can be easily counted, not what really matters.

Briceno’s article goes on to discuss the idea that important elements of success–creativity, collaboration and communication skills, mindset–cannot be easily assessed, and thus aren’t.  It’s easy to know if a student can answer “what’s 5 x 7?” or “which church reform did Henry VIII lead?” but it’s much harder to know if a child will persevere after hitting a setback.  It’s easy to tell if a child can correctly identify what’s wrong with the sentence “Elizabeth runned for the ball,” but harder to know if she could write a complex paragraph on her own.  The result?  We test what’s easy–multiplication facts, punctuation–but not what’s difficult.

There’s nothing wrong with wanting children to know how to multiply and punctuate.  Those are important skills.  But they’re low-level skills, and if we stop there, we’re in real trouble once we leave the cocooned confines of the classroom.


Finland, long held up as a model of educational success, does things totally differently.  Finland engages in a number of positive educational practices, like lengthy teacher preparation programs, small class sizes, and equitable funding, but it also assesses totally differently.  While America has been ratcheting up the testing, Finland has been pulling back.  The results?  Finland leads achievement on the PISA, an international assessment of math, reading, and science, whereas the US is stagnating below the mean.

Top ten PISA scores (chart)

Linda Darling-Hammond, a Professor of Education at Stanford University, shares how:

“There are no external standardized tests used to rank students or schools in Finland, and most teacher feedback to students is in narrative form, emphasizing descriptions of their learning progress and areas for growth. …samples of students are evaluated on open-ended assessments at the end of the second and ninth grades to inform curriculum and school investments. The focus is on using information to drive learning and problem-solving, rather than punishment.” (NEA Today, October 2010)

By using open-ended assessment and teacher feedback in narrative form, Finland is able to assess a much broader scope of skills than our standardized tests can handle.  The result?  Their teaching focuses on a broader, more complex set of skills.

In America, assessment is the tail that wags the dog.  Whatever we test, we teach.  We have to, with curriculum, school rankings, and salaries all tied to those assessments.  Testing what’s easy is cheap in the short term, but expensive in the long term as we emphasize low-level skills.  So let’s use those tests for good.  Let’s test what we really care about–discrete skills, like multiplication, but also complex thinking, problem solving, collaboration.  Let’s put our minds to work to come up with solutions for how tests can be comprehensive, but also economical, rather than coming up with more carrot and stick measures for schools.  Let’s measure what’s really important so that we can teach what matters.
