Accrediting bodies across fields are pushing hard to see how organizations give feedback to students to promote learning, and how they use that same feedback to improve their own performance. For example, NCATE requires that programs have an "assessment system that collects and analyzes data on applicant qualifications, candidate and graduate performance, and unit operations to evaluate and improve the performance of candidates, the unit, and its programs." This means that teacher education programs must be in the business of Organizational Learning (OL), which can be defined as the "detection and correction of error" (Argyris & Schon, 1978). Even if you aren't an NCATE program, this is a pretty good goal. In the attempt to meet the letter of the law, we need to avoid creating parallel processes that exist only for compliance; they distract from the day-to-day, semester-to-semester work at the core of educating students and advancing knowledge.
In keeping with the spirit of accreditation requirements, we do need systems that generate feedback to students, faculty and programs. These systems must be designed so that the feedback generated can be authentically used in the workflow of the unit without taxing the human capital (i.e. help people do work that matters without killing them). A driving philosophy for the Open Portfolio project is that using the tools does not have to be additive, but ideally captures, refines and makes explicit what is already happening at a tacit level in a program.
This primer is intended to serve as an overview of the approach and architecture of the Open Portfolio platform used by the UK College of Education (and a few other places) to facilitate both candidate and program assessment. One thing needs to be stated up front: an eportfolio IS NOT a Roomba. You know, those little vacuum cleaners you turn on and let roam around the house? Well, it turns out they don't do a great job of cleaning your house. I think the Roomba is a good metaphor for portfolio systems: they aren't turnkey solutions that run on their own. They require all members of the school to participate, and they must be integrated into the fabric of how learning is done to get the most out of them.
The Open Portfolio project originally began (early 2000s) as an attempt to see if we could move from a file-cabinet-based data infrastructure to a digital one using open source technology. Since then, the project has evolved into working with programs to collect standards-based, "high-resolution" artifacts from learners. These atomic-level data can then be combined to form "molecular" snapshots of an individual student's progress over time. Going a step further, that data can be repackaged into "complex compounds" that look at multiple students at a specific point or across time to generate program-level feedback. I refer to this as the chemistry of good eportfolios: your portfolio is only as good as its most basic component.
Student products are at the heart of the electronic portfolio process. Digitizing and cataloging the work provides the evidence necessary for assessment at the individual level or at the program level. Learners are required to select evidence of their performance and link the evidence to standards by describing the artifact and justifying how they have met the standard (see Figure 1). Some programs specify artifacts and the standards they meet, but the idea is still the same. Experts score the learner selections and may leave written feedback as well (see Figure 1). The individual artifact data can then be combined to create feedback at the individual or program level (see Figure 2). Scores from artifacts can be aggregated by any number of factors (e.g., learner, year or standard). Within seconds, faculty can generate "real time" data displays during an academic year or for multi-year reflection. There is enough flexibility in the structure to reflect the particular needs and approaches of the various programs, but there is a structure to the platform that programs should consider in order to maximize the use of the tool. The functionality will continue to evolve as more people begin using it, so please make suggestions. Click any of the images below to expand them.
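To make the "aggregate by any factor" idea concrete, here is a minimal sketch of that kind of roll-up. The record shape, field names and values are hypothetical illustrations, not the actual OTIS schema:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical artifact records: each scored artifact links a learner,
# a standard, a year, and the score an evaluator assigned.
artifacts = [
    {"learner": "A", "standard": "Tech-1", "year": 2023, "score": 3},
    {"learner": "A", "standard": "Plan-2", "year": 2023, "score": 2},
    {"learner": "B", "standard": "Tech-1", "year": 2023, "score": 4},
    {"learner": "B", "standard": "Tech-1", "year": 2024, "score": 3},
]

def aggregate(records, key):
    """Average artifact scores grouped by any factor (learner, year, standard)."""
    groups = defaultdict(list)
    for r in records:
        groups[r[key]].append(r["score"])
    return {k: mean(v) for k, v in groups.items()}

print(aggregate(artifacts, "standard"))  # program-level view by standard
print(aggregate(artifacts, "learner"))   # individual-level view
```

The same handful of atomic records supports both the individual and the program view; only the grouping key changes.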
I've included another example of an artifact as well as some graphics that outline the design assumptions that are currently driving the development of open portfolio.
Learners must show a certain level of competence in a standard set (e.g., candidate meets the required level of attainment with the College of Education Technology Standards). Any decision of Target Met / Target Not Met represented in the Official College Database (CEPIS) must reflect a body of accessible evidence. The OTIS portfolio system provides a mechanism for housing this evidence and facilitating the collection, evaluation and synthesis of evidence to support the continuous assessment process. What follows are a few basic steps for getting the most out of the electronic portfolio process. The goal is to move beyond faculty compliance and create a rich process that becomes an integral part of how a program operates.
Other sources of evidence, such as performance observations, can contribute to the overall body of evidence and can also be captured within the Open Portfolio environment. Figure 7 shows an example of the reporting dashboard for performance observations/evaluations. These can be anything from a classroom observation to a group of audience members giving feedback on a musical performance or art show. Click on the image to enlarge.
Below is another representation of the possible data collected through Open Portfolio during a student's time in the program.
STEP 1: Pick your scale
There are two options people tend to go with here. The first is a relative scale according to stage: for example, student A is performing at an exemplary level in area X for the mid-point review. The other is an absolute continuum of performance that you are moving students along. OTIS uses the absolute scale because it better captures growth. If a student is operating at the exit level during their sophomore year, that can be reflected in the scoring. From a technology standpoint, the reporting options are also more robust for creating differentiated performance profiles. For classes we often need to give relative grades, especially if they are taken in a developmental sequence. A proficient artifact from Methods I wouldn't be of the same quality as a proficient artifact from Methods II, but for the sake of assigning a class grade we need to take into account the expected outcome for the particular course/stage in the program. The default scale is shown below in Figure 3. Programs can choose a 3-, 4- or 5-point scale as well as customize the descriptors. Whatever the scale, a key activity is developing a consistent interpretation of what the indicators mean and making it okay not to inflate scores. Otherwise you end up with a massive clump of uniform data that no one believes is accurate or valuable (which brings us back to compliance).
Figure 3. Example of scoring scale.
Here's one of my favorite examples from one of the programs I work with. They have defined their scores in terms of the professional capacity of a candidate.
Defining an absolute progression is the first step, but more important is continuing the discussion to develop program-level criteria that describe the characteristics of the evidence a student would provide to earn recognition for a certain level of competence in a domain (e.g., planning, technology use, etc.). Here is a great example from a K-12 Language Arts program.
STEP 2: Pick your "cut score"
Basically, you are telling students where they need to get to by the various points in the program. This will be shaped by what sort of scale you have chosen, though as Captain Jack would say, "They're more like guidelines." The next phase of development for the OTIS portfolio is to have this schema built in, so that as students accumulate evidence they have more ownership over how they are progressing. Table 1 attempts to give a sense of what a program "cut score" looks like, showing what evidence is required at which level: how many instances (the number) and at what level (color). At this point it is probably important to emphasize that this is meant to be a basic guide indicating a student is an acceptable candidate (i.e., safe to put in a school). The top candidates in any program will have a profile that far exceeds the "cut score." Why are some of the standards blank, you ask? The intent is to demonstrate grasp of the set, which might mean that students don't have to check off every element. When you stack all the individual profiles on top of each other, you show coverage at the program level of analysis.
Example of standards of evidence for candidate matriculation
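The cut-score idea above can be sketched in a few lines. The spec shape, standard names and thresholds here are hypothetical stand-ins for whatever a program's version of Table 1 encodes; standards left out of the spec play the role of the "blank" cells:

```python
# Hypothetical cut-score spec: standard -> (min_count, min_level).
# Standards absent from the spec (the "blank" cells) carry no requirement.
cut_score = {
    "Planning": (2, 3),      # two artifacts scored at level 3 or above
    "Technology": (1, 2),
    "Content": (2, 3),
}

# A student's scored evidence: (standard, score) pairs.
evidence = [
    ("Planning", 3), ("Planning", 4),
    ("Technology", 2),
    ("Content", 3),
]

def find_gaps(evidence, spec):
    """Return the standards where the evidence falls short of the spec."""
    gaps = []
    for standard, (min_count, min_level) in spec.items():
        hits = sum(1 for s, score in evidence if s == standard and score >= min_level)
        if hits < min_count:
            gaps.append(standard)
    return gaps

print(find_gaps(evidence, cut_score))  # → ['Content']
```

Stacking each student's gap report gives the program-level coverage picture described above.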
STEP 3: Where does evidence come from and when does it get evaluated
There are a tremendous number of options here. Most work generated for portfolios occurs during coursework. Do you want people to score work in OTIS as it occurs in the various courses and then let students select which ones to include for the portfolio review? Alternatively, a program may ask students to assemble a portfolio and then score it together during a group review. At the end of the process a student should have a sample of work that can be juxtaposed against the "cut score" to determine their standing in the program, which is recorded during the CAR process in CEPIS. I think feedback given close to the creation of the artifact is more powerful and engages the student in continuously constructing their professional identity. For example, a student recently included a piece of work from one of my courses in their exit portfolio. A student may link an artifact to a standard I don't feel comfortable rating; I might leave that unscored and make a note of it in my comments, and if the student includes that piece in their portfolio, the item can be addressed at the portfolio review. At this moment some of you are thinking that this won't work for scenario A or faculty member B, and you are probably right. The workflow of this process is a difficult, but solvable, part of the e-portfolio process. Jettisoning some existing practices may be part of it, and it may take a year or more to find a suitable "equilibrium state" for your program. If it is to be meaningful rather than compliance busywork, this process requires that faculty and programs coordinate and communicate around candidate performance (which is not such a bad thing).
I would encourage programs to limit the number of artifacts that will be accepted for a portfolio review. A good artifact (comprised of student product, rationale and reflection) can represent achievement in multiple areas (e.g., technology use, managing instruction and content knowledge could all show up in the same lesson plan). If a program limits students to 5 artifacts for a review, with each artifact matched to up to four standards/outcomes, that allows potentially 20 standards to be addressed per review (60 opportunities across three reviews over the course of a program). Most initial preparation programs have between 21 and 35 standards to cover. Five isn't a magic number, just an example of how to make the process more manageable for faculty. Rather than merely surviving the scoring process, reviewers can focus on giving good feedback and honest scores. Honest scoring is the linchpin of making the data meaningful in the various reports the system generates.
One such report allows program faculty to look at student performance relative to program-level targets (see Table 1 for an example). The following figure shows a report that illustrates the mechanism for doing this. At each of the three checkpoints there are quality targets set for student work. For example, at the first checkpoint most of the focus is on the student's ability to use various mediums and their capabilities to produce and analyze art. The target level for this first review point is level 2 (a.k.a. ready for advanced study). The report pulls every artifact that has been tagged for inclusion in the entry portfolio and looks at the scores given to the learning outcomes. The percentage displayed shows the share of artifacts that were at or above the target level for that review point. A program could set a percentage cut-off (e.g., 75%): if an outcome drops below it, the outcome becomes an action item for the program, as it is deemed a programmatic issue rather than an issue to be addressed with individual students.
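The calculation behind that report is simple to sketch. The outcome names and scores below are made-up illustrations, not real program data:

```python
# Hypothetical scores for artifacts tagged to each learning outcome at the
# entry-portfolio checkpoint, where the target level is 2.
def pct_at_or_above(scores, target):
    """Share of artifact scores at or above the checkpoint's target level."""
    return sum(1 for s in scores if s >= target) / len(scores)

entry_portfolio = {
    "Use of media": [2, 3, 2, 1, 2],
    "Analysis of art": [1, 2, 1, 1, 3],
}
TARGET_LEVEL = 2     # level 2: "ready for advanced study"
PROGRAM_CUTOFF = 0.75  # below this, the outcome becomes a program action item

for outcome, scores in entry_portfolio.items():
    pct = pct_at_or_above(scores, TARGET_LEVEL)
    status = "ACTION ITEM" if pct < PROGRAM_CUTOFF else "on target"
    print(f"{outcome}: {pct:.0%} at/above target -- {status}")
```

The cut-off routes the finding: above it, any shortfall is handled with individual students; below it, the outcome lands on the program's agenda.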
Another example of a program-level report is similar to the individual standard matrix. In the example below we see the aggregate scores for an entire program. This can be filtered by cohort and/or course. By clicking on a cell, the program can pull up every artifact connected with an outcome, allowing faculty to examine the original student work, revisit the scores each faculty member assigned and have evidence-grounded discussions of changes to the program.
In addition to the person-by-item matrix presented earlier, programs can also generate score summaries by the type of user completing the form (e.g., how students rate themselves vs. faculty vs. clinical partners).
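That summary is the same grouping idea applied to rater type rather than learner or standard. A minimal sketch, with made-up user types and scores:

```python
from statistics import mean

# Hypothetical scores on one outcome, keyed by the type of user who
# completed the form; user-type names and values are illustrative.
scores_by_rater = {
    "student self-rating": [4, 3, 4],
    "faculty": [3, 3, 2],
    "clinical partner": [3, 4, 3],
}

summary = {rater: mean(scores) for rater, scores in scores_by_rater.items()}
for rater, avg in summary.items():
    print(f"{rater}: {avg:.2f}")

# A large gap between self-ratings and faculty ratings can itself be a
# conversation starter at a portfolio review.
gap = summary["student self-rating"] - summary["faculty"]
```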
These reports allow program faculty to have both real-time and reflective conversations about the performance of candidates and the program. Faculty can also pull up "signature assignments" and look for patterns in performance.