This is the blog section of the PURE Reform website. Please leave your thoughts and comments here.
PURE Reform has created this blog as a forum for parents, teachers and community members to share information and voice concerns regarding the reform process in the Pittsburgh Public Schools. Although we would like to foster constructive dialogue, PURE Reform does not edit content. The views expressed by bloggers in this forum are not necessarily views held by PURE Reform.
To comment on an existing topic, go to the line at the bottom of that topic's post that begins "Posted by..." That line will list "1 comment," "2 comments," etc. Click on "comments," then leave your comment in the box provided. To post as Anonymous, no registration is required, OR you can choose an identity.
To suggest a new topic, go to this month's post labeled "Start a New Post" and add your comment (as described above) about the new suggested topic. PURE Reform will use these comments to start new posts.
Ebony Pugh, spokesperson for the PPS, made the following comment regarding the marked difference in results that appears when PPS PSSA scores are tracked for the same class of students across four years rather than grade by grade.
"It's very difficult to look at a group of kids across the years when it's not necessarily the same kids taking the test each year," Pugh said. "The state and the federal government recommend that we look at our data by grade levels, and that's what we do."
Although I'm sure that there is a certain level of turnover in the student population that results in slight differences between, say, one year's 3rd grade and the next year's 4th grade, I do not believe that the turnover among either high-performing or low-performing students could be large enough to make this evaluation invalid. In other words, some meaningful share of departures, say 10%, would have to consist only of high-performing students, or only of low-performing students, to skew these results. Logic dictates that the students who leave are a mix of high, mid-level and low achievers. This could easily be verified with demographic data the district already has, e.g. the number of children on free/reduced lunch from year to year.
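A minimal simulation makes the point concrete (the scores, the cohort size and the 10% turnover figure below are all hypothetical, chosen only for illustration): when the students who leave are a mix of achievement levels, the cohort's average barely moves; only when the departures are concentrated at one end of the distribution does it shift noticeably.

```python
import random

random.seed(1)

# A hypothetical cohort of 2,000 students with scores centered near
# 1300 -- illustrative values only, not actual PSSA data.
cohort = [random.gauss(1300, 150) for _ in range(2000)]

def mean(xs):
    return sum(xs) / len(xs)

# Case 1: 10% of the cohort leaves at random -- a mix of achievement levels.
random_stayers = random.sample(cohort, int(len(cohort) * 0.9))

# Case 2: the same 10% attrition, drawn entirely from the top scorers.
selective_stayers = sorted(cohort)[: int(len(cohort) * 0.9)]

print(f"original mean:           {mean(cohort):7.1f}")
print(f"after 10% random exits:  {mean(random_stayers):7.1f}")    # barely moves
print(f"after 10% top-end exits: {mean(selective_stayers):7.1f}")  # clearly lower
```

Since falling cohort scores could only be explained away by departures that are lopsided in exactly that second way, the district's existing demographic data could confirm or rule this out.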
Furthermore, the argument that the results would not be relevant because you couldn't guarantee that you would be looking at the same group of students each year is puzzling: by following one class over four years you might have a 10% difference in the population, but by comparing one year's 3rd grade with the next year's 3rd grade you are looking at a 100% difference in population.
5 comments:
At last week's Education Committee meeting, when Director McCrea requested longitudinal information, no one suggested that this was an inappropriate or irrelevant request; instead he was told that this information could be provided to him. Many administration members were present at the meeting.
The state's PVAAS system also looks at a cohort of students as they move through grade levels. The state's description of this method suggests that the longitudinal approach mitigates the effect of migration into or out of a school. See: http://www.pdeinfo.state.pa.us/a_and_t/lib/a_and_t/Overview_StateImpl110106.pdf
As Kathy observed, a 10% or so change in the composition of a class is a lot less than the 100% yearly change in the composition of students at a particular grade level.
Also, I'm sure the district has the technology to run these numbers just for students who were in PPS for the entire four-year period. If that data showed gains rather than regression (for example, because higher-performing students had disproportionately left the district), I'm sure we would hear about it.
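The filter itself would be straightforward. Here is a rough sketch of the idea, assuming the district's data can be flattened into per-student rows; the student IDs, years and scores below are made up for illustration:

```python
# Hypothetical (student_id, year, scaled_score) rows -- not real data.
records = [
    ("s1", 2004, 1310), ("s1", 2005, 1295), ("s1", 2006, 1280), ("s1", 2007, 1270),
    ("s2", 2004, 1190), ("s2", 2005, 1205), ("s2", 2006, 1220), ("s2", 2007, 1215),
    ("s3", 2005, 1400), ("s3", 2006, 1390),  # enrolled only two years
]
YEARS = {2004, 2005, 2006, 2007}

# Group each student's scores by year.
by_student = {}
for sid, year, score in records:
    by_student.setdefault(sid, {})[year] = score

# Keep only students tested in the district in all four years.
stayers = {sid: s for sid, s in by_student.items() if YEARS.issubset(s)}

# Average score change for the students who never left.
changes = [s[2007] - s[2004] for s in stayers.values()]
print(f"{len(stayers)} students present all four years; "
      f"mean change: {sum(changes) / len(changes):+.1f} points")
```

If the always-enrolled group showed the same regression, the turnover explanation would collapse entirely.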
"It's very difficult to look at a group of kids across the years when it's not necessarily the same kids taking the test each year," Pugh said.
Unless she's talking about kids who are held back, then, uh, as Kathy Fine points out, they are comparing different kids EVERY time they release scores. The only way to look at mostly the same kids is to do it longitudinally, at the district level. That at least evens out any variation due to within-district movement of students from school to school.
"The state and the federal government recommend that we look at our data by grade levels, and that's what we do."
Uhh, well, sort of. Actually, the feds don't recommend; they require! That is, the formulas for AYP do look at the same grade, different kids, year after year.
But, as Annette Werner points out, the state has *added* the concept of PVAAS, which looks at changes in student performance during a year, with expectations based on past performance. Obviously, this is seen as a more useful measurement in terms of kids and their scores, rather than an overall score for the district.
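To put the two measurements side by side (the percent-proficient figures below are invented, purely for illustration): AYP-style reporting compares this year's 3rd graders with last year's 3rd graders, entirely different children, while a cohort view follows the same class forward a grade.

```python
# Invented percent-proficient figures, for illustration only.
pct_proficient = {
    ("grade 3", 2006): 55.0,  # last year's 3rd graders
    ("grade 3", 2007): 58.0,  # this year's 3rd graders (different children)
    ("grade 4", 2007): 51.0,  # last year's 3rd graders, retested in grade 4
}

# AYP-style: same grade, different kids, year over year.
ayp_change = pct_proficient[("grade 3", 2007)] - pct_proficient[("grade 3", 2006)]

# Cohort-style: the same class, followed into the next grade. (PVAAS goes
# further, comparing each student to an expectation built from that
# student's own prior scores, but the underlying idea is the same.)
cohort_change = pct_proficient[("grade 4", 2007)] - pct_proficient[("grade 3", 2006)]

print(f"grade-level change: {ayp_change:+.1f} points")   # looks like progress
print(f"cohort change:      {cohort_change:+.1f} points")  # shows regression
```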
For a "data-driven" district, that's a pretty poor explanation for all of this.
Well, they have to say SOMETHING other than "no comment"...
Post a Comment