Blending layers of CUMULATE's user model

This work focuses on the mutual influence of heterogeneous user activity types (reading texts, viewing examples, and solving problems) on a user's knowledge of an educational domain. Ordinarily, CUMULATE considers these activity types separately, classifying them into modeling tiers (levels), each corresponding to a level of Bloom's taxonomy of intellectual behavior. A great part of this work is devoted to cross-tier user modeling: namely, how user activity on one layer can (positively) influence another layer or layers.
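
As a rough illustration of that layering: the comprehension and application assignments below follow Study 1, while placing text reading on the knowledge tier is our assumption based on Bloom's taxonomy, not something stated in the original text.

    # Activity type -> modeling tier, named after Bloom's taxonomy levels.
    # The first mapping is assumed; the other two follow Study 1 below.
    TIER_BY_ACTIVITY = {
        "reading_text":    "knowledge",      # assumed lowest tier
        "viewing_example": "comprehension",
        "solving_problem": "application",
    }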

Study 1

In this study we inspected the influence of the comprehension layer of CUMULATE's user model (viewing examples) on the application layer (solving problems). Our approach was to create "blends" of the problem-solving tier of the user model with the example-viewing tier weighted from 0 to 1 in steps of 0.1; a code sketch follows the lists below. Our hypotheses were that:

  • Using example-browsing activity when modeling problem solving improves model accuracy
  • Different users benefit from different "blends" of user model levels

In addition, we wanted to see whether:

  • There is a single optimal blend for all users, and/or
  • Classes of users benefitting from different model blends can be determined
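
To make the blending concrete, here is a minimal Python sketch. It assumes each tier is a dictionary mapping domain concepts to knowledge estimates in [0, 1], and it combines the tiers by adding the weighted example-tier value to the problem-solving value, capped at 1; the names and the exact combination rule are our assumptions, not CUMULATE's actual code.

    def blend_tiers(problem_tier, example_tier, weight):
        # problem_tier, example_tier: dicts of concept -> knowledge in [0, 1].
        # weight: share of example evidence, 0.0 (ignore) to 1.0 (full).
        # The capped-additive rule below is an assumption; CUMULATE's
        # internal update may combine the tiers differently.
        concepts = set(problem_tier) | set(example_tier)
        return {c: min(1.0, problem_tier.get(c, 0.0)
                            + weight * example_tier.get(c, 0.0))
                for c in concepts}

    # One blended model per weight 0.0, 0.1, ..., 1.0 gives 11 models per user.
    WEIGHTS = [round(0.1 * i, 1) for i in range(11)]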

Data

We used student usage logs previously collected from four Database Management courses offered during the Fall 2007 and Spring 2008 semesters at the University of Pittsburgh and Dublin City University. Each course had roughly the same structure: an identical set of 48 problems served by the SQLKnoT system and an identical set of 64 examples served by the WebEx system. Basic usage statistics are given in the table below.

School      Semester     Level*  No. of users  Avg. problem attempts  Avg. example views  Avg. distinct problems  Avg. distinct examples
U. of Pitt  Fall 2007    U       27            156.40                 189.00              29.96                   32.07
U. of Pitt  Fall 2007    G       20            61.70                  104.70              29.95                   29.10
U. of Pitt  Spring 2008  U       15            26.94                  46.65               16.35                   10.29
DCU         Spring 2008  U       52            81.68                  257.25              22.82                   38.63

  * U – undergraduate, G – graduate

Procedures

Out of the 114 users who took these courses, 56 were chosen. The criterion was that a user had to cover at least 33% of the problems (15/48) and at least 33% of the examples (15/64). For each qualified user, 11 user models were computed, with 0% to 100% of the example-browsing activity "blended" into the problem solving (by percentage we mean numerical weighting, not item filtering).
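
A minimal sketch of this filtering and model-computation step, reusing blend_tiers and WEIGHTS from the sketch above; activity_log, problem_tiers, and example_tiers are hypothetical containers standing in for the collected usage logs.

    MIN_PROBLEMS = 15  # "at least 33% of problems (15/48)"
    MIN_EXAMPLES = 15  # "at least 33% of examples (15/64)"

    def qualifies(problems_seen, examples_seen):
        # Keep only users who covered enough of both activity types.
        return (len(problems_seen) >= MIN_PROBLEMS
                and len(examples_seen) >= MIN_EXAMPLES)

    # activity_log: user id -> (set of distinct problems, set of distinct examples).
    qualified = [u for u, (p, e) in activity_log.items() if qualifies(p, e)]

    # Eleven blended models per qualified user (problem_tiers / example_tiers
    # are hypothetical per-user tier dictionaries).
    models = {(u, w): blend_tiers(problem_tiers[u], example_tiers[u], w)
              for u in qualified for w in WEIGHTS}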

Metrics

  • Accuracy was used to measure the resulting user model quality.
  • Number of "best blends": the number of user models with a 10% or greater blend of example activity for which the accuracy is highest
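
A sketch of how these two metrics could be computed per user; predict (the model's prediction for an item) and held_out (observed per-user outcomes) are hypothetical stand-ins, since the original evaluation code is not shown here.

    def accuracy(model, observations):
        # Fraction of observed (item, outcome) pairs the model predicts correctly.
        hits = sum(predict(model, item) == outcome for item, outcome in observations)
        return hits / len(observations)

    def best_weight(user):
        # Blend weight whose model reaches the highest accuracy for this user.
        return max(WEIGHTS, key=lambda w: accuracy(models[(user, w)], held_out[user]))

    # "Best blends": users whose most accurate model blends in >= 10% example activity.
    best_blends = sum(best_weight(u) >= 0.1 for u in qualified)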

Results