The idea of students learning “hard” and “soft” skills in school has gone viral among educators and policymakers in the past decade (see here and here). “Soft” skills refers to people skills such as communication, sensitivity, and social awareness that permit students to collaborate with others and work smoothly inside and outside organizations. Here is one listing of such skills:
- Effective communication
- Critical thinking
- Willingness to learn
“Hard” skills refers to the technical proficiencies children and youth acquire and use in different situations, such as reading, writing, math, and operating electronic devices, learned in and out of school. Measures of such skills, from paper-and-pencil tests to real-life demonstrations, have a long history and are readily available.
Now here is the segue I want to make: from “hard” and “soft” skills to “hard” and “soft” forms of school effectiveness. I aim to expand the constricted definition of a “good” school, one judged effective today by familiar numbers, into a definition with more to it than those numbers alone.
“Hard” effectiveness is the easy one to define. The measures that policymakers, practitioners, donors, and parents use to judge a school today as “good,” “excellent,” “high performing,” “effective,” or similar terms are easy to list:
- Standardized achievement test scores
- High school graduation rate
- Percentage of high school graduates who attend college
- Percentage of those going to college who receive a degree
Of course, there are other such quantifiable measures, but the ones above are most often used to determine school (and district) effectiveness.
Whenever possible, these measures are used to compare a school with others in its district, its state, and, where applicable, the nation. Thus, rankings often appear in district and state reports identifying the highest, mediocre, and lowest performers.
While there has been a raft of critiques of the reliance on test scores as the primary metric for determining school quality (see here and here), scores remain central to determining a particular school’s worth.
“Soft” effectiveness is the tough sell to anyone convinced that the measures above are the best and only ones that should be used. Those who see the current crush of interest in social-emotional learning (SEL) as a cresting wave of change, one that will stretch “hard” effectiveness into a broader, more humane, and realistic purpose of schooling, should pay attention (see here and here).
What I have noted in the excitement for SEL to become part of every teacher’s lessons is that the rationale for its classroom presence (the selling of it) is that SEL helps schools raise their reading and math scores and high school graduation rates while lowering their suspensions of students. The Collaborative for Academic, Social, and Emotional Learning (CASEL) partners with many school districts to measure the results of social-emotional learning curricula on students, schools, and education. Recent studies were encouraging for a slew of reasons, and the advocacy organization published the following outcomes to spread its message:
In the 19 school districts (serving 1.6 million students) that were measured—including Austin, Atlanta, Boston, Baltimore, Chicago, Cleveland, Denver, Minneapolis, Nashville, Oakland, Sacramento, and Tulsa—these were the high-level research findings from implementing SEL programs:
- Several districts saw improved reading and math scores in students.
- Several districts saw improved GPAs and higher test scores among students.
- Many districts had improved student behavior—higher graduation rates, better school attendance, fewer suspensions, and improved social-emotional competencies.
- Some school districts saw marked improvements in school climate.
Such findings remind me of an earlier, similar instance. When the standards, testing, and accountability movement in math and reading moved into high gear in the decades before and after the turn of the 21st century, champions of art, music, and the humanities, fearful of losing students and funding for these subjects, justified their worth by linking the study of these subjects to gains in test scores, high school graduation rates, and the like (see here and here).
This similarity is not a stale example of the false truism that history repeats itself. Implementing SEL because it helps traditional measures of quality reveals anew that “hard” effectiveness continues its dominance in judging school quality.
But, of course, it doesn’t have to stay that way. Consider the work of the MA Consortium for Innovative Educational Assessment (MCIEA), which, importantly, offers essential resources for blending both “hard” and “soft” effectiveness in determining a “good” school.
MCIEA’s School Quality Measures framework aims to describe the full measure of what makes a good school, using five major categories: the first three are essential inputs and the last two are key outcomes.
Whether such a plan blending both “hard” and “soft” effectiveness metrics will spread beyond districts in Massachusetts, I cannot predict. But its existence now is, at the least, proof that such broader, more inclusive measures of quality can be adopted and put into practice. And for that I am grateful.