CUMULATE user and domain adaptive user modeling


This page is under construction. More content will be added soon.


This stream of work aims to improve CUMULATE's legacy one-size-fits-all algorithm for modeling users' problem-solving activity and to create a context-sensitive user modeling algorithm that is adaptable/adaptive both to individual users' cognitive abilities and to individual problem complexities.

A new parametrized user modeling algorithm has been devised. A set of studies has been set up to evaluate the new algorithm as well as its adaptability/adaptivity.
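The specifics of the legacy and parametrized algorithms are described on their dedicated pages; as a minimal sketch only (the update rule, names, and constants below are assumptions, not CUMULATE's actual formulas), an asymptotic knowledge update with a tunable learning-speed parameter could look like the following, where the legacy behavior corresponds to one fixed speed and the parametrized behavior derives the speed from user and problem parameters:

 def asymptotic_update(knowledge, success, speed):
     """Illustrative asymptotic knowledge update (not CUMULATE's actual formula).
 
     knowledge: current skill estimate in [0, 1]
     success:   whether the attempt was successful
     speed:     learning-speed parameter controlling how fast mastery is approached
     """
     if success:
         # Move a fraction `speed` of the remaining distance toward mastery (1.0),
         # so repeated successes approach 1.0 asymptotically.
         return knowledge + speed * (1.0 - knowledge)
     return knowledge  # in this sketch a failed attempt leaves the estimate unchanged
 
 # Legacy, one-size-fits-all flavor: a single fixed speed for every user and problem.
 LEGACY_SPEED = 0.4  # illustrative constant
 
 # Parametrized flavor: the speed is derived from user and problem parameters.
 def parametrized_speed(user_ability, problem_complexity):
     # Illustrative combination: able users on simpler problems progress faster.
     return user_ability * (1.0 - problem_complexity)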

Study 1

This study is a retrospective comparative evaluation of CUMULATE's legacy and parametrized user modeling algorithms. The evaluation uses usage logs collected from 6 Database Management courses offered during the Fall 2007 and Spring 2008 semesters at the University of Pittsburgh, the National College of Ireland, and Dublin City University. Each course had roughly the same structure and an identical set of problems served by the SQLKnoT system.

Scenario

We compared the legacy CUMULATE algorithm with 3 versions of the parametrized algorithm. The versions differed in the parameters used for user modeling; a rough sketch follows the list below.

  • The first version attempted to shadow the legacy algorithm by guessing the best parameters for modeling, without discriminating between individual users and problems.
  • The second version likewise did not discriminate between users and problems; however, its parameters were obtained by fitting a global user parameter and a global problem parameter signature and then using them in the model.
  • The third version of the parametrized algorithm worked with a set of user-specific parameters and problem-specific parameter signatures for the modeling.
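A minimal sketch of how the three versions differ purely in where the update parameters come from (the data structures, names, and values below are illustrative assumptions, not the actual implementation):

 # Illustrative parameter stores; the structure and values are assumptions.
 GLOBAL_USER_SPEED = 0.45                 # one user parameter fitted for all users
 GLOBAL_PROBLEM_WEIGHT = 0.8              # one problem signature fitted for all problems
 USER_SPEED = {"u1": 0.55, "u2": 0.21}    # parameters fitted per user
 PROBLEM_WEIGHT = {"p1": 0.8, "p2": 0.4}  # parameter signatures fitted per problem
 
 def speed_for(version, user, problem):
     """Return the learning-speed parameter an attempt would use under
     each of the three parametrized versions (sketch only)."""
     if version == "best_guess":
         return 0.4                                          # hand-picked constant, no fitting
     if version == "global":
         return GLOBAL_USER_SPEED * GLOBAL_PROBLEM_WEIGHT    # fitted, shared by everyone
     if version == "individual":
         return USER_SPEED[user] * PROBLEM_WEIGHT[problem]   # user- and problem-specific
     raise ValueError(f"unknown version: {version}")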

Procedures

Accuracy and SSE (sum of squared errors) were used as the comparison metrics. They were computed overall for each of the 6 semester logs and each of the 4 algorithm versions.
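The page does not spell out how the metrics were computed; a straightforward sketch, assuming the model outputs a success probability per attempt and that accuracy thresholds it at 0.5, is:

 def accuracy(predicted, observed, threshold=0.5):
     """Fraction of attempts where the thresholded prediction matches
     the observed success/failure outcome."""
     hits = sum((p >= threshold) == bool(o) for p, o in zip(predicted, observed))
     return hits / len(predicted)
 
 def sse(predicted, observed):
     """Sum of squared errors between predicted probabilities and 0/1 outcomes."""
     return sum((p - o) ** 2 for p, o in zip(predicted, observed))
 
 # Example: three attempts, predicted success probabilities vs. actual outcomes.
 print(accuracy([0.9, 0.2, 0.6], [1, 0, 0]))  # 0.666...
 print(sse([0.9, 0.2, 0.6], [1, 0, 0]))       # 0.41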

In the case of the legacy CUMULATE and the parametrized CUMULATE best-guess algorithms, the data was taken as is. The globally parametrized and individually parametrized CUMULATE algorithms were supplied with the pre-fit global/individual user- and problem-specific parameters. The data of only one of the early courses was used to obtain the parameters; data of all 6 courses was used to compute the parametrized models. Refer to the table below for details and basic log statistics.

Semester | School | Level | Procedures | Users | Datapoints | Attempts per user | Problems per user
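How the pre-fit parameters were obtained is not detailed on this page; as a rough sketch under the same assumptions as above (an asymptotic update, a single global speed, and per-user skill estimates only), a global parameter could be chosen by replaying one course's log and minimizing SSE over a grid, then reused unchanged on the other five logs:

 def sse_for_speed(log, speed):
     """Replay a log of (user, problem, success) attempts with a fixed speed
     and accumulate the squared prediction error (sketch only)."""
     knowledge = {}  # per-user skill estimate; problems/concepts are ignored for brevity
     total = 0.0
     for user, problem, success in log:
         k = knowledge.get(user, 0.0)
         total += (k - success) ** 2  # predict with the current estimate
         if success:                  # then update it asymptotically
             knowledge[user] = k + speed * (1.0 - k)
     return total
 
 def fit_global_speed(training_log):
     """Pick the global speed that minimizes SSE on one course's log (grid search)."""
     grid = [i / 100 for i in range(1, 100)]
     return min(grid, key=lambda s: sse_for_speed(training_log, s))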

Results

Publication

References

Contacts

Michael V. Yudelson