First Year Composition Program: Assessment

The First-Year Composition Program at NIU is actively engaged in various forms of assessment, with the goal of using our assessment results to improve the program. In 2002-2003, we developed our own program outcomes based on the Council of Writing Program Administrators' Outcomes Statement. To support our development of authentic, evidence-based program assessment, we became a member of the first cohort of the Inter/National Coalition for Electronic Portfolio Research in 2004, and we have been improving our electronic portfolio pedagogy and assessments ever since. In 2014, we developed an assessment rubric that aligned both with our outcomes statement and with the Association of American Colleges & Universities VALUE rubrics (http://www.aacu.org/value/rubrics).

For more information about the history of eportfolios in NIU FYComp, see http://www.engl.niu.edu/mday/eport.html

A 2017 assessment presentation is also available.


Electronic Assessment of Eportfolios: Quantitative Results

Description:

Immediately after the fall 2014 semester, we collected student eportfolios from a sample of over three hundred ENGL 103 students, as determined by the Office of Assessment, both to assess our students according to our programmatic outcomes and to establish a longitudinal baseline of progress for students after their first semester of college. Sections with and without peer advocates were also compared in order to evaluate the success of the Peer Advocate Program. FYComp teachers convened for a series of half- or full-day reading sessions and scored the eportfolios according to our newly revised programmatic rubric. Portfolios and readers were grouped so that teachers were not reading portfolios from their own classes, and each portfolio was scored by at least two different readers.

Each scoring session began with a series of calibration sets: readers scored sample portfolios, and the results were then discussed in the group. Group leaders had themselves read, scored, and discussed the calibration sets beforehand. In addition, group leaders were given an administrative interface to the assessment engine that allowed them to see and display scores from their own group and, in the case of the calibration sets, from other groups as well. We believe this calibration is an essential part of the assessment process, both because it allows readers to see and possibly adjust their scoring against a communal norm and, more importantly, because it facilitates a discussion about why certain features of a text are rewarded or penalized.
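The grouping and double-scoring procedure described above can be illustrated with a short sketch. The Python snippet below is a hypothetical illustration only, not the logic of our actual assessment engine: it assumes portfolios are tracked as (id, teacher) pairs and assigns each portfolio to two readers, never its own teacher, while keeping reading loads roughly balanced.

    import random
    from collections import defaultdict

    def assign_readers(portfolios, readers, per_portfolio=2, seed=0):
        """Assign each portfolio to `per_portfolio` readers, never its own teacher.

        `portfolios` is a list of (portfolio_id, teacher) pairs; `readers` is a
        list of teacher names. Both structures are hypothetical placeholders.
        """
        rng = random.Random(seed)
        load = defaultdict(int)   # portfolios assigned to each reader so far
        assignments = {}
        for pid, teacher in portfolios:
            # a teacher never reads portfolios from their own class
            eligible = [r for r in readers if r != teacher]
            # shuffle, then prefer the least-loaded readers; the stable sort
            # breaks ties randomly while balancing workloads
            rng.shuffle(eligible)
            eligible.sort(key=lambda r: load[r])
            chosen = eligible[:per_portfolio]
            for r in chosen:
                load[r] += 1
            assignments[pid] = chosen
        return assignments

    # Example: three portfolios, four readers; each portfolio gets two readers.
    portfolios = [("p1", "Alice"), ("p2", "Bob"), ("p3", "Alice")]
    readers = ["Alice", "Bob", "Carol", "Dan"]
    print(assign_readers(portfolios, readers))

In practice our readers worked in face-to-face groups, so any real assignment would also respect those group boundaries; the sketch simply shows how the two constraints named above (no self-reading, at least two readers per portfolio) can be enforced.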


Results