I've always wondered what it does to our emotional well-being when we have to resort to picking up a book like "Juggling for Dummies" or "The Complete Idiot's Guide to JavaScript." Why not provide people with information that helps them AND boosts their self-esteem? Hence the title of this document. OTIS Online is an ongoing effort to develop and share tools that make learning a more powerful experience. OTIS projects are usually built with free, open-source software and licensed under the General Public License (GPL). Open Portfolio is one of the flagship applications of this project.

There is a big push from accrediting bodies across all fields to see how organizations give feedback to students to promote learning, and how they use that same feedback to improve their own performance. For example, NCATE requires that programs have an "assessment system that collects and analyzes data on applicant qualifications, candidate and graduate performance, and unit operations to evaluate and improve the performance of candidates, the unit, and its programs." This means that teacher education programs must be in the business of Organizational Learning (OL), which can be defined as the "detection and correction of error" (Argyris & Schon, 1978). Even if you aren't an NCATE program, this is a pretty good goal. In the attempt to meet the letter of the law, we need to be careful to avoid creating parallel processes that exist simply for compliance; they become a distraction from the day-to-day, semester-to-semester work of our core tasks: educating students and advancing knowledge.

In keeping with the spirit of accreditation requirements, we do need systems that generate feedback for students, faculty and programs. These systems must be designed so that the feedback can be used authentically in the workflow of the unit without taxing its human capital (i.e., help people do work that matters without killing them). A driving philosophy of the Open Portfolio project is that using the tools should not be additive; ideally they capture, refine and make explicit what is already happening at a tacit level in a program.

This primer is intended as an overview of the approach and architecture of the Open Portfolio platform used by the UK College of Education (and a few other places) to facilitate both candidate and program assessment. One thing needs to be stated up front: an eportfolio IS NOT a Roomba. You know, those little vacuum cleaners you turn on and let roam around the house? It turns out they don't do a great job of cleaning your house, and I think the Roomba is a good metaphor for portfolio systems. They aren't turnkey solutions that run on their own. They require all members of the school to participate, and they must be woven into the fabric of how learning is done to get the most out of them.

The Open Portfolio project began in the early 2000s as an attempt to see whether we could move from a file-cabinet-based data infrastructure to a digital one using open-source technology. Since then, the project has evolved into working with programs to collect standards-based, "high-resolution" artifacts from learners. These atomic-level data can be combined into "molecular" snapshots of an individual student's progress over time. Going a step further, that data can be repackaged into "complex compounds" that look at multiple students at a specific point, or across time, to generate program-level feedback. I refer to this as the chemistry of good eportfolios: your portfolio is only as good as its most basic component.
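To make the chemistry metaphor a bit more concrete, here is a minimal sketch in Python. The field and function names are mine, invented for illustration; the actual Open Portfolio schema is different and richer. The point is simply that atomic artifact scores can roll up into a single student's snapshot and then into a program-level view.

```python
from dataclasses import dataclass
from collections import defaultdict
from statistics import mean

@dataclass
class ArtifactScore:
    """One "atom": a single scored artifact tagged to one standard (illustrative fields)."""
    student: str
    standard: str      # e.g. "KTS2"
    checkpoint: str    # e.g. "entry", "mid", "exit"
    score: int         # position on the program's scoring scale

def student_snapshot(scores, student):
    """A "molecule": one student's average score per standard over time."""
    by_standard = defaultdict(list)
    for s in scores:
        if s.student == student:
            by_standard[s.standard].append(s.score)
    return {std: mean(vals) for std, vals in by_standard.items()}

def program_view(scores, checkpoint):
    """A "complex compound": every student's scores per standard at one checkpoint."""
    by_standard = defaultdict(list)
    for s in scores:
        if s.checkpoint == checkpoint:
            by_standard[s.standard].append(s.score)
    return {std: mean(vals) for std, vals in by_standard.items()}
```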

Student products are at the heart of the electronic portfolio process. Digitizing and cataloging the work provides the evidence necessary for assessment at the individual or program level. Learners select evidence of their performance and link it to standards by describing the artifact and justifying how they have met the standard (see Figure 1). Some programs specify the artifacts and the standards they meet, but the idea is the same. Experts score the learner selections and may leave written feedback as well (see Figure 1). The individual artifact data can then be combined to create feedback at the individual or program level (see Figure 2). Scores from artifacts can be aggregated by any number of factors (e.g., learner, year or standard). Within seconds, faculty can generate real-time data displays during an academic year or for multi-year reflection. There is enough flexibility to reflect the particular needs and approaches of the various programs, but there is a structure to the platform that programs should consider in order to get the most out of the tool. The functionality will continue to evolve as more people begin using it, so please make suggestions.
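As a rough sketch of what "aggregated by any number of factors" means in practice, the snippet below (illustrative record shape, not the platform's actual data model) builds the kind of person-by-standard grid shown in the performance matrix figures; swapping the grouping keys gives views by year, cohort, and so on.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical scored-artifact records; the field names are illustrative only.
records = [
    {"learner": "A. Rivera", "standard": "KTS2", "score": 3},
    {"learner": "A. Rivera", "standard": "KTS7", "score": 4},
    {"learner": "B. Chen",   "standard": "KTS2", "score": 2},
    {"learner": "B. Chen",   "standard": "KTS7", "score": 3},
]

def matrix(records, row_key="learner", col_key="standard"):
    """Average score for every (row, column) pair -- the kind of
    person-by-standard grid the performance matrix figures show.
    Changing row_key/col_key aggregates by other factors."""
    cells = defaultdict(list)
    for r in records:
        cells[(r[row_key], r[col_key])].append(r["score"])
    return {cell: mean(vals) for cell, vals in cells.items()}

print(matrix(records))
# {('A. Rivera', 'KTS2'): 3, ('A. Rivera', 'KTS7'): 4,
#  ('B. Chen', 'KTS2'): 2, ('B. Chen', 'KTS7'): 3}
```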

Figure 1: Artifact
Figure 2: Sample portfolio with performance matrix
Figure 3: Sample performance matrices

I've included another example of an artifact, as well as some graphics that outline the design assumptions currently driving the development of Open Portfolio.

Figure 4: Portfolio Purpose
Figure 5: Anatomy of an artifact

Learners must show a certain level of competence in a standard set (e.g., the candidate meets the required level of attainment on the College of Education Technology Standards). Any decision of Target Met / Target Not Met recorded in the Official College Database (CEPIS) must reflect a body of accessible evidence. The OTIS portfolio system provides a mechanism for housing this evidence and for facilitating its collection, evaluation and synthesis as part of the continuous assessment process. What follows are a few basic steps for getting the most out of the electronic portfolio process. The goal is to move beyond faculty compliance and create a rich process that becomes an integral part of how a program operates.

Other sources of evidence, such as observations, can also contribute to the overall body of evidence and can be captured within the Open Portfolio environment. Figure 7 shows an example of the reporting dashboard for performance observations/evaluations. These can be anything from a classroom observation to a group of audience members giving feedback on a musical performance or art show.

A bunch of these...

...can give you this:

Figure 7: Collecting other forms of evidence (observations, site visits, presentation juries, etc.)
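To show how observation-style evidence can sit alongside artifacts, here is a minimal sketch of what such a record might contain; the field names and form shape are assumptions for illustration, not the actual Open Portfolio observation form.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Observation:
    """An observation/evaluation event captured as evidence (illustrative fields)."""
    student: str
    observer_role: str   # e.g. "faculty", "clinical partner", "audience member"
    context: str         # e.g. "classroom observation", "performance jury"
    when: date
    ratings: dict        # outcome code -> score on the program's scale
    comments: str = ""

obs = Observation(
    student="A. Rivera",
    observer_role="clinical partner",
    context="classroom observation",
    when=date(2010, 3, 12),
    ratings={"KTS3": 2, "KTS4": 3},
    comments="Strong pacing; transitions between activities still need work.",
)

# Many observations like this one can be pooled into the same dashboard views
# used for artifacts (score distributions per outcome, per semester, per observer role).
```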

 


Below is another representation of the possible data collected through Open Portfolio during a student's time in the program.

STEP 1: Pick your scale

There are two options people tend to choose here. The first is a relative scale tied to program stage: for example, student A is performing at an exemplary level in area X for the mid-point review. The other is an absolute continuum of performance that you move students along. OTIS uses the absolute scale because it better captures growth; if a student is operating at the exit level during their sophomore year, that can be reflected in the scoring. From a technology standpoint, the reporting options are also more robust for creating differentiated performance profiles. For classes we often need to give relative grades, especially when courses are taken in a developmental sequence. A proficient artifact from Methods I won't be of the same quality as a proficient artifact from Methods II, but for the sake of assigning a course grade we need to take into account the expected outcome for that particular course/stage in the program. The default scale is shown below in Figure 3. Programs can choose a 3-, 4- or 5-point scale and customize the descriptors. Whatever the scale, a key activity is developing a consistent interpretation of what the indicators mean and making it OK not to inflate scores. Otherwise you end up with a massive clump of uniform data that no one buys as accurate or valuable (which brings us back to compliance).

Figure 3. Example of scoring scale.

Here's one of my favorite examples, from one of the programs I work with. They have defined their scores in terms of the professional capacity of the candidate.

1 - Beginning Graduate Clinician

Demonstrates competency, resourcefulness and creativity rarely or seldom, and/or requires considerable guidance and assistance in the execution of this specific clinical skill, regardless of the disorder. The graduate clinician contributes little to the learning process.

The graduate clinician could visit with a person who presents with a communication impairment.

2 - Developing Graduate Clinician

Demonstrates inconsistent competency, independence, resourcefulness and creativity in the execution of this specific clinical skill across disorders; the level of competency displayed is disorder specific. The graduate clinician requires regular direction and feedback, contributing some to the learning process.

The graduate clinician could function as a one-to-one aide for a person with a communication impairment.

3 - Advanced Graduate Clinician

Demonstrates general competency and consistency across disorders but is still developing independence, resourcefulness and creativity in the execution of this specific clinical skill. The degree of direction and feedback required is dictated by the disorder with the graduate clinician contributing much to the learning process.

The graduate clinician could function as a speech-language pathology assistant.

4 - Beginning Professional

Demonstrates adequate competency, consistency and independence in the execution of the clinical skill regardless of disorder, but lacks resourcefulness and creativity. The graduate clinician requires periodic guidance and feedback about the execution of this specific clinical skill, with low-incidence disorders requiring more guidance. The graduate clinician directs his/her learning, remediating areas of need.

The graduate clinician could effectively manage a caseload while benefiting from mentoring.

5 - Advanced Beginning Professional

Demonstrates competency, consistency, creativity, resourcefulness and independence, requiring minimal to no feedback. Implementation of the clinical skill is above that expected of a beginning clinician. The graduate clinician's learning is influenced by other disciplines and areas of study.

The graduate clinician could mentor another beginning speech-language pathologist on this specific clinical skill.
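As a sketch of how a program-defined scale like the one above might be represented in a configuration (the structure and names here are my own illustration, not the platform's internals), it is essentially an ordered list of levels with program-authored labels and descriptors:

```python
# Illustrative scale configuration: a program could define a 3-, 4- or 5-point
# scale and customize each label and descriptor (descriptors omitted for brevity).
CLINICAL_SCALE = [
    {"value": 1, "label": "Beginning Graduate Clinician"},
    {"value": 2, "label": "Developing Graduate Clinician"},
    {"value": 3, "label": "Advanced Graduate Clinician"},
    {"value": 4, "label": "Beginning Professional"},
    {"value": 5, "label": "Advanced Beginning Professional"},
]

def label_for(score, scale=CLINICAL_SCALE):
    """Translate a raw score into the program's own descriptor for reporting."""
    for level in scale:
        if level["value"] == score:
            return level["label"]
    raise ValueError(f"Score {score!r} is not on this scale")

print(label_for(3))  # Advanced Graduate Clinician
```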

Defining an absolute progression is the first step, but more important is continuing the discussion to develop program-level criteria that describe the characteristics of the evidence a student would provide to earn recognition for a certain level of competence in a domain (e.g., planning, technology use, etc.). Here is a great example from a K-12 Language Arts program.

 

STEP 2: Pick your "cut score"

Basically, you are telling students where they need to get to by the various points in the program. Of course, this will be shaped by the scale you have chosen, though as Captain Jack would say, "They're more like guidelines." The next phase of development for the OTIS portfolio is to build this schema in, so that as students accumulate evidence they have more ownership of how they are progressing. Table 1 attempts to give a sense of what a program "cut score" looks like, showing which evidence is required at which level, how many instances (the number) and at what level (color). At this point it is probably important to emphasize that this is meant to be a basic guide indicating that a student is an acceptable candidate (i.e., safe to put in a school). The top candidates in any program will have a profile that far exceeds the "cut score." Why are some of the standards blank, you ask? The intent is to demonstrate grasp of the set, which might mean that students don't have to check off every element. When you stack all the individual profiles on top of each other, you show coverage at the program level of analysis.

Table 1. Example of standards of evidence for candidate matriculation
(number of instances of evidence required at each checkpoint; "-" = none specified)

Standard  Description                                                        Entry  Mid  Exit
COET1     Integrates media and Technology into Instruction                     -     1    1
COET2     Utilizes Multiple Technology Applications to Student Learning        -     1    2
COET3     Selects Appropriate Technology to Enhance Instruction                -     -    -
COET4     Integrates Student Use of Technology into Instruction                -     1    2
COET5     Addresses Special Learning Needs Through Technology and Media        -     1    1
COET6     Promotes Ethical and Legal Use of Technology                         1     1    1
FSD1      Communicates Appropriately and Effectively                           1     1    1
FSD2      Demonstrate Constructive Attitudes                                   1     1    1
FSD3      Demonstrates Ability to Conceptualize Key Subject Matter Ideas       1     2    3
FSD4      Interacts Appropriately and Effectively with Diverse Groups          -     1    1
FSD5      Demonstrates a Commitment to Professional Ethics and Behavior        1     1    1
KTS1      Demonstrates Applied Content Knowledge                               -     1    3
KTS2      Designs and Plans Instruction                                        1     1    3
KTS3      Creates and maintains learning climate                               -     1    2
KTS4      Implements and manages instruction                                   -     1    2
KTS5      Assesses and communicates learning results                           -     1    2
KTS6      Demonstrates the implementation of technology                        -     -    -
KTS7      Reflects and evaluates teaching and learning                         -     1    3
KTS8      Collaborates with colleagues, parents and teachers                   -     1    2
KTS9      Evaluates teaching and implements professional development           -     -    1
KTS10     Provides leadership within the school, community and profession      -     -    -
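To give a sense of how a profile like Table 1 could be checked programmatically, here is a minimal sketch. The requirements dictionary is a subset of the "Mid" column above, and the minimum quality level is an assumption standing in for the color coding of the original table.

```python
from collections import Counter

# Subset of the "Mid" column of Table 1: number of qualifying artifacts
# required per standard. min_level stands in for the color-coded target.
MID_REQUIREMENTS = {"COET6": 1, "FSD1": 1, "FSD3": 2, "KTS2": 1, "KTS7": 1}

def meets_profile(artifacts, requirements, min_level=2):
    """artifacts: (standard, score) pairs tagged for this review point."""
    counts = Counter(std for std, score in artifacts if score >= min_level)
    gaps = {std: need - counts.get(std, 0)
            for std, need in requirements.items()
            if counts.get(std, 0) < need}
    return len(gaps) == 0, gaps

ok, gaps = meets_profile(
    [("FSD1", 3), ("FSD3", 2), ("FSD3", 2), ("KTS2", 2), ("KTS7", 1), ("COET6", 2)],
    MID_REQUIREMENTS,
)
print(ok, gaps)  # False {'KTS7': 1} -- the KTS7 artifact scored below target
```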

STEP 3: Where does evidence come from and when does it get evaluated?

There are a tremendous number of options here. Most work generated for portfolios occurs during coursework. Do you want people to score work in OTIS as it occurs in the various courses and then let students select which pieces to include for the portfolio review? Alternatively, a program may ask students to assemble a portfolio and then score it together during a group review. At the end of the process a student should have a sample of work that can be juxtaposed against the "cut score" to determine their standing in the program, which is recorded during the CAR process in CEPIS. I think the proximity of feedback to the creation of the artifact is more powerful and engages the student in continuously constructing their professional identity. For example, a student recently included a piece of work from one of my courses in their exit portfolio. A student may tag a standard that I don't feel comfortable rating, so I might leave that item unscored and make a note of it in my comments; if the student keeps the piece in their portfolio, that item can be addressed at the portfolio review. At this moment some of you are thinking that this won't work for scenario A or faculty member B, and you are probably right. The workflow is a difficult, but solvable, part of the e-portfolio process. Jettisoning some existing practices may be part of it, and it may take a year or more to find a suitable "equilibrium state" for your program. If it is to be meaningful rather than compliance busywork, this process requires that faculty and programs coordinate and communicate around candidate performance (which is not such a bad thing).

I would encourage programs to limit the number of artifacts that will be accepted for a portfolio review. A good artifact (comprising the student product, a rationale and a reflection) can represent achievement in multiple areas (e.g., technology use, managing instruction and content knowledge could all show up in the same lesson plan). If a program limits students to 5 artifacts per review, with each artifact matched to up to four standards/outcomes, that allows potentially 20 standards to be addressed per review (60 opportunities over the course of a program). Most initial preparation programs have between 21 and 35 standards to cover. Five isn't a magic number, just an example of how to make the process more manageable for faculty. Rather than merely surviving the scoring process, reviewers can focus on giving good feedback and honest scores. Honest scoring is the linchpin of making the data meaningful in the various reports the system generates.
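The arithmetic behind that example is simple enough to sanity-check; the numbers below are the ones from the paragraph, not platform limits.

```python
artifacts_per_review = 5      # example cap from the paragraph above
standards_per_artifact = 4    # each artifact matched to up to four outcomes
reviews_in_program = 3        # e.g. entry, mid and exit reviews

per_review = artifacts_per_review * standards_per_artifact    # 20 standards per review
over_program = per_review * reviews_in_program                # 60 opportunities overall
print(per_review, over_program)  # 20 60 -- enough room to cover 21-35 program standards
```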

One such report allows program faculty to look at student performance relative to program-level targets (see Table 1 for an example). The following figure shows a report that illustrates the mechanism for doing this. At each of the three checkpoints there are quality targets set for student work. For example, at the first checkpoint most of the focus is on the student's ability to use various mediums and on their capability to produce and analyze art. The target level for this first review point is level 2 (a.k.a. ready for advanced study). The report pulls every artifact that has been tagged for inclusion in the entry portfolio and looks at the scores given to the learning outcomes. The percentage displayed is the share of artifacts that were at or above the target level for that review point. A program could set a percentage cut-off (e.g., 75%); if an outcome drops below it, that outcome becomes an action item for the program, since it is deemed a programmatic issue rather than something to be addressed with individual students.
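A rough sketch of that percentage calculation and cut-off check (function and variable names are mine; this is not the platform's actual reporting code):

```python
def percent_at_or_above(scores, target_level):
    """Share of artifact scores for one outcome that meet the review-point target."""
    if not scores:
        return None
    hits = sum(1 for s in scores if s >= target_level)
    return 100.0 * hits / len(scores)

# Hypothetical entry-portfolio scores given to one learning outcome.
entry_scores = [2, 3, 1, 2, 2, 3, 1, 2]
pct = percent_at_or_above(entry_scores, target_level=2)
print(f"{pct:.0f}% at or above target")   # 75% at or above target

PROGRAM_CUTOFF = 75  # example threshold; below this the outcome becomes an action item
if pct is not None and pct < PROGRAM_CUTOFF:
    print("Flag this outcome for program-level review")
```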

Another program-level report is similar to the individual standard matrix. In the example below we see the aggregate scores for an entire program. This can be filtered by cohort and/or course. By clicking on a cell, the program can pull up every artifact connected with an outcome, which allows faculty to examine the original student work, revisit the scores each faculty member assigned and have evidence-grounded discussions about changes to the program.

In addition to the person-by-item matrix presented earlier, programs can also generate score summaries by the type of user completing the form (e.g., how do students rate themselves vs. faculty vs. clinical partners?).
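That comparison amounts to grouping the same scores by the role of whoever completed the form, roughly like this (illustrative data and field names):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical completed forms for one outcome; "role" is who filled the form in.
submissions = [
    {"role": "student", "score": 4},
    {"role": "student", "score": 3},
    {"role": "faculty", "score": 3},
    {"role": "faculty", "score": 2},
    {"role": "clinical partner", "score": 3},
]

by_role = defaultdict(list)
for s in submissions:
    by_role[s["role"]].append(s["score"])

for role, scores in by_role.items():
    print(f"{role}: mean {mean(scores):.1f} (n={len(scores)})")
```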

These reports allow program faculty to have both real-time and reflective conversations about the performance of candidates and the program. Faculty can also pull up "signature assignments" and look for patterns in performance.