CUMULATE user and domain adaptive user modeling

From PAWS Lab

Revision as of 19:32, 9 April 2009


This page is under construction. More content will be added soon.


This stream of work aims to improve CUMULATE's legacy one-size-fits-all algorithm for modeling users' problem-solving activity and to create a context-sensitive user modeling algorithm that is adaptable/adaptive to individual users' cognitive abilities as well as to the complexity of individual problems.

A new parametrized user modeling algorithm has been devised. A set of studies has been set up to evaluate the new algorithm as well as its adaptability/adaptivity.
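To illustrate what such a parametrization could look like, the sketch below modulates an asymptotic knowledge update with a per-user learning-speed parameter and a per-problem complexity parameter. The update rule, parameter names, and the 0.4 base rate are assumptions made for this sketch, not the actual CUMULATE formula.

```python
def update_knowledge(k, success, user_speed=1.0, problem_complexity=1.0):
    """Asymptotic knowledge update for one attempt on one concept.

    k                  -- current knowledge level in [0, 1]
    success            -- 1.0 for a correct attempt, 0.0 otherwise
    user_speed         -- per-user parameter (>1 learns faster)
    problem_complexity -- per-problem parameter (>1 slows the gain)

    NOTE: a hypothetical sketch, not the actual CUMULATE update rule.
    """
    gain = success * user_speed / problem_complexity
    # Knowledge approaches 1.0 asymptotically; each success closes a
    # fraction of the remaining gap. 0.4 is an assumed base rate.
    return k + (1.0 - k) * min(gain, 1.0) * 0.4

k = 0.0
for outcome in [1, 1, 0, 1]:  # a toy attempt sequence
    k = update_knowledge(k, outcome, user_speed=1.2, problem_complexity=1.5)
print(round(k, 3))  # prints 0.686
```

A one-size-fits-all model would hold `user_speed` and `problem_complexity` fixed at 1.0 for everyone; the parametrized versions discussed below differ in whether these parameters are guessed, fit globally, or fit per user and per problem.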

Study 1

This study is a retrospective comparative evaluation of CUMULATE's legacy and parametrized user modeling algorithms. The evaluation uses usage logs collected from 6 Database Management courses offered during the Fall 2007 and Spring 2008 semesters at the University of Pittsburgh, the National College of Ireland, and Dublin City University. Each course had roughly the same structure and an identical set of problems served by the SQLKnoT system.

Scenario

We compared the legacy CUMULATE algorithm with 3 versions of the parametrized algorithm. The versions differed in the parameters used for user modeling.

  • The first version attempted to shadow the legacy algorithm by guessing the best modeling parameters, without discriminating between individual users or individual problems.
  • The second version also did not discriminate between users or problems; however, its parameters were obtained by fitting a global user parameter and a global problem parameter signature, which were then used in the model.
  • The third version of the parametrized algorithm worked with a set of user-specific parameters and problem-specific parameter signatures.

Procedures

Accuracy and SSE (sum of squared errors) were used as the comparison metrics. They were computed overall for each of the 6 semester logs and each of the 4 algorithm versions.
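For concreteness, both metrics can be computed from predicted success probabilities and observed 0/1 outcomes as in the minimal sketch below; the 0.5 decision threshold for accuracy is an assumption, not something stated on this page.

```python
def sse(predicted, observed):
    """Sum of squared errors between predicted probabilities and outcomes."""
    return sum((p - o) ** 2 for p, o in zip(predicted, observed))

def accuracy(predicted, observed, threshold=0.5):
    """Fraction of attempts where the thresholded prediction matches
    the observed 0/1 outcome."""
    hits = sum((p >= threshold) == (o == 1)
               for p, o in zip(predicted, observed))
    return hits / len(observed)

preds = [0.8, 0.3, 0.6, 0.9]
obs   = [1,   0,   0,   1]
print(round(sse(preds, obs), 2))  # 0.04 + 0.09 + 0.36 + 0.01 -> prints 0.5
print(accuracy(preds, obs))       # 3 of 4 correct -> prints 0.75
```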

For the legacy CUMULATE and parametrized CUMULATE-best-guess algorithms, the data was taken as is. The globally parametrized and individually parametrized CUMULATE algorithms were supplied with the pre-fit global/individual user- and problem-specific parameters. The data of only one of the early courses was used to obtain the parameters; data from all 6 courses was used to compute the parametrized models. Refer to the table below for details and basic log statistics.

Semester  | School | Level | Users | Datapoints | Attempts per user | Problems per user
Fall'07   | Pitt   | Und.  | 27    | 4224       | 156.44            | 29.96
Fall'07   | Pitt   | Grad. | 20    | 1233       | 61.65             | 29.95
Spring'08 | Pitt   | Und.  | 15    | 458        | 26.94             | 16.35
Spring'08 | NCI    | Und.  | 17    | 216        | 12.71             | 6.59
Spring'08 | NCI    | Und.  | 18    | 142        | 7.89              | 4.00
Spring'08 | DCU    | Und.  | 52    | 4574       | 81.68             | 22.82

Results

Publication

References

Contacts

Michael V. Yudelson