
(continued from page 5 - What Matters?) meaningful. A value of .921, like the one observed when comparing math scores and ELA scores, basically indicates you're looking at two measures of the same thing. Second, an r-squared analysis was conducted to see how much of the variation in test score rank was attributable to the characteristic being considered. For statistical purposes even very small r-squared values can be informative. The relative importance of a factor is made evident by the size of its coefficient of determination, r-squared. Math tests and ELA tests were considered separately because some factors affect them in significantly different ways. The same analysis was done for the whole state population and for the rural population. The summary is attached as an appendix, and the individual tables are available upon request.

Based on the results of the two-layered analysis we can compare the influence of the eighteen factors. The Pearson's r analysis shows the extent to which schools scoring well on a given independent variable were similarly successful in their test performance. Values range from -.563, a strong correlation, to -.015, a weak correlation. Keep in mind it is the distance between the value and zero that indicates the strength of the correlation (e.g., an r = -.500 correlation is stronger than an r = .100 correlation). Similarly, the r-squared analysis indicates how much of the variance in student proficiency can be explained by variation in the specific factor: that is, how much of the variation in the dependent variable (district rank by percent of students testing proficient) is explained by variation in the independent variable. Those values ranged from .000 to .317. Clearly some factors don't matter much at all.
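The relationship between Pearson's r and r-squared described above can be illustrated with a short sketch. The ranks below are hypothetical, invented for illustration only; they are not the report's actual data. The sketch computes r from paired district ranks and squares it to get the share of variance explained:

```python
# Minimal sketch of the two-layered analysis: Pearson's r for direction
# and strength, r^2 for the share of variance explained.
# NOTE: the rank data here are hypothetical, not the report's figures.
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient for two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical example: district rank on a factor (e.g. FRL eligibility)
# vs. district rank on test scores. Higher eligibility, lower scores.
factor_rank = [1, 2, 3, 4, 5, 6, 7, 8]
score_rank  = [7, 8, 5, 6, 4, 2, 3, 1]

r = pearson_r(factor_rank, score_rank)
r_squared = r ** 2
print(f"r = {r:.3f}, r^2 = {r_squared:.3f}")  # r = -0.929, r^2 = 0.862
```

A strongly negative r still yields a large positive r-squared, which is why a factor with r = -.563 can explain roughly a third (.317) of the variance while one near -.015 explains essentially none.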
The percentage of students eligible for special education services has no effect on a district's performance on the state test. This may seem counterintuitive, but consider that all schools have special education programs mandated by federal law and effectively monitored by outside auditors. It makes sense that there is no meaningful correlation. At the other end of the spectrum, Free and Reduced-price Meal Program eligibility has the highest correlation with test scores. The Pearson correlation shows a negative relationship: as the level of eligibility increases, test scores decrease. The r-squared value shows that almost a third of the variation in test score rank can be explained by variation in FRL eligibility rank.

This is also the point where rural schools are most different from the whole state population. The r-squared values for our rural population are almost half what the whole population demonstrated. This would imply that if the same analysis were performed for just the metropolitan school districts, we would find that even more of their variance is attributable to FRL eligibility. So, we've confirmed what I've suspected for years: very little matters more than the socioeconomic background of a district. But within the rural context you don't have the homogeneity of backgrounds you find within the metropolitan areas. The varied nature of rural districts means you don't have the extreme highs and lows. When test scores alone are used to identify school quality, rural districts will fall in the B, C, and D range, with fewer A's and F's. This might seem disheartening to some, since it can be taken to mean that all the efforts to make schools great again are overshadowed by the fatalistic reality that zip code really does matter. There are two points of light to help us see past this dismal reality. First, there is the eighteenth factor, 'Operational Efficiency'.
Contrary to the policy analyst's supposition, it does matter. It also matters just as much for the rural district as for the broad (continued on page 16)


