
I’ve finished marking the first summative assessment I am involved with in EDUC9404. Whilst still on my mission to be responsive to learner need, it’s important to be able to look at the data these assessments show me and consider whether my changes to the teaching program have had any effect. Usually, in the high school setting, I’d measure my success against predicted grades. In this instance it’s very hard to evaluate how well my students have done against prior performance indicators, purely because I don’t have access to that information; there is no expectation that I should be using it to measure against. Data-driven analysis is therefore a little difficult. I don’t even have access to their GPAs.
What I do know is that the changes I made this year were based upon my experiences teaching the same course last year. So, if we compare the data from last year’s cohort with this year’s, what do we see?
2016:
This cohort had the same assignment. However, they did not have the same assessment rubric and we did not use OneNote as a means to communicate and learn. The rubric last year wasn’t aligned to the Australian Professional Standards and had no guiding content for each level.
See Last Year’s Rubric: Assignment 2 Feedback 2016
2017:
I started my experiment with OneNote, learning as I went. I slowly got to grips with the ways in which OneNote might help me to learn about my students’ needs and respond to them accordingly. Read more about that here. Inspired by a PD session by the Professional Experience team, I also re-wrote the rubrics to follow a similar convention to the one used on Prac. My aim was to teach students to ‘Heat Map’ their rubric and focus their understanding of what was being assessed and how.
See this Year’s Rubric: Assignment 2 Rubric 2017
With these two data sets I can see a marginal improvement in the rate of High Distinctions (from 48% to 51%). This year, of the two students who failed, one misunderstood the task (and that fail will soon convert to a pass) and the other passed well but handed the assignment in so late that the penalties dragged their grade down to a fail. (The SAM (Statement of Assessment Methods) states that for each day, or part thereof, that an assignment is late I have to deduct 5%.) Last year, the two fails were two students who were unable to complete the course and did not submit an assignment at all.
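As a side note, that late-penalty rule adds up quickly. Here is a minimal sketch (Python, purely for illustration) of how the deduction might be calculated, assuming the 5% comes off the maximum available mark for each day or part-day late; the SAM may define the base of the deduction differently, so treat the details as my assumption.

```python
import math

def apply_late_penalty(raw_mark, days_late, penalty_per_day=0.05, max_mark=100):
    """Sketch of a '5% per day or part thereof' late penalty.
    Assumes the deduction is taken from the maximum mark (my assumption)."""
    whole_days = math.ceil(days_late)              # "or part thereof" rounds up
    deduction = whole_days * penalty_per_day * max_mark
    return max(raw_mark - deduction, 0)

# e.g. a piece of work marked 72/100 but handed in 6.5 days late:
# 7 part-days x 5 marks = 35 marks off, leaving 37 - enough to turn a pass into a fail.
print(apply_late_penalty(72, 6.5))                 # 37.0
```

That is roughly how a well-written assignment can still end up below the pass mark once the lateness is accounted for.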
From these data sets, it is difficult to see that the changes I have made to the program have had a very strong impact on the cohort. Of course, it is encouraging to see that there were proportionately more HDs than last year. Perhaps the most encouraging trend is the apparent increase in students moving from Pass to Credit this year. Could it be that my interventions and pedagogical decisions have had the most impact at this band of ability?
TARGET: Next year, I need to find out where I can get data on my students. I’d like to start with at least the following information:
- What subject they specialise in
- What their GPA is
- What their predicted graduation grade/score is
I’d also love information like:
- Where have they had the opportunity to hit these focus areas before? (A long-term map of the whole course should show me.)
- What mark did they get in those assignments which hit the same FAs?
- What were their strengths and weaknesses in those areas?
- What are their strengths and weaknesses across the whole course so far?
I wonder where I can find that information without coming out and asking them for it myself. Assessment data analysis, in my experience, has always relied on context, and I am struggling to find a context that meets my needs here. However, I am able to make some judgements based on commonalities I saw between students across the cohort.
The following features came out strongly.
- They must be completing a differentiation topic at the same time. This came out very strongly in their answers and it was lovely to see them being able to use readings from that topic to back up their pedagogical choices in mine. It’s worth noting, however, that I told them to open up their thinking and try to connect the dots; in effect, I gave them permission to do this. It would be wonderful to talk to that topic’s coordinator about how we might support each other a little.
- These students thought deeply about the cohort they had invented and there were some amazing examples of PCK. They are most definitely showing signs that they will be able to respond to learner need and, I hope, that they understand why that is so important. I sincerely hope that this leads to some adventurous planning and learning for their students next year.
- Some students found it hard to pick one activity and focus on it. Instead, a fair few wrote about several activities in one lesson. This was not what the task called for, yet so many did it. I wonder why? Is it too hard for them to focus on one activity without experiencing the whole lesson? Should this influence the design of this assessment next year?
- The word limit needs to be enforced far more strongly. The longest essay I received ran to over 7,000 words (it was meant to be a 2,000-word essay). There was a lot of table usage, with some idea that words in tables don’t count. I will make sure that they DO count next year. Marking such long essays almost killed me, and it didn’t actually result in higher marks. At 2,200 words the pen goes down, tables or no tables.
- The most successful students were able to stay focused and write only what the rubric demanded. Some students were just writing as much as they could remember from their workshops, almost as if they felt that everything I’d taught them had to be demonstrated in this assignment. That’s not the case. Part of the challenge is to select what is necessary and save the rest for next year. Perhaps I need to make that more explicit to them?
- A focus on one tool, with discussion of its features and how they may be used to benefit students with varying needs, was also the most successful way to complete this assignment. Could this be something that I prescribe more specifically? That is, instead of asking them to look at one activity, I could ask them to look at one tool and explore its options. It’s effectively the same task, but I wonder if this wording would help them to focus more effectively?
- The movement of the Gen Cap workshop, to make room for another one in preparation for this topic, may have meant that far fewer students felt able to connect the Gen Cap to the activity they were describing. If the Gen Cap is assessed in assignment 3, do I need it in assignment 2? Just as, if 2.6 is assessed heavily in assignment 2, do I need it in assignment 3?
As much as students love to hate completing questionnaires, I am tempted to see if any of mine might be willing to answer some of the questions shown here in bold.
I certainly have something to think about when it comes to prepping for this again next year. Sometimes, I wish that I could invite some of my collaborators to do the course so I could get their honest opinion! It would be great to get some feedback.
With my teacher hat on (not my CEO one, I promise), I wonder how this might all be different if each of my students had been keeping an Edufolio for the year. What data that Edufolio would show me! With a simple dashboard I could have a much better understanding of my students’ interests, needs and strengths just by looking at their tag clouds, and that’s even before we ask the database some simple questions. Maybe one day?
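To make that last thought a little more concrete, here is a tiny, entirely hypothetical sketch (Python, with invented data and field names, since I don’t know Edufolio’s actual schema) of the kind of simple question a tag-based dashboard could answer: which focus areas has each student written about, and how often?

```python
from collections import Counter, defaultdict

# Hypothetical export of Edufolio posts as (student, tags) pairs.
# These names and tags are invented purely for illustration.
posts = [
    ("Student A", ["differentiation", "ICT", "APST 2.6"]),
    ("Student A", ["ICT", "assessment"]),
    ("Student B", ["literacy", "APST 2.6"]),
]

# Build a simple tag cloud per student.
tag_clouds = defaultdict(Counter)
for student, tags in posts:
    tag_clouds[student].update(tags)

# The kind of "simple question" a dashboard might answer at a glance:
for student, cloud in tag_clouds.items():
    print(student, cloud.most_common(3))
```

Even something that basic would tell me more about where each student’s interests and strengths lie than I currently know at the start of the topic.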