
Modeling Human-AI Cognitive Alignment on Protected Data

dc.contributor.author: Darveau, Vivianne
dc.contributor.author: Darveau, Peter
dc.date.accessioned: 2025-03-04T21:10:11Z
dc.date.available: 2025-03-04T21:10:11Z
dc.date.issued: 2025-02-14
dc.description.abstract: This study explores the quantification of cognitive alignment between human expert reasoning and solutions generated by Large Language Models (LLMs) in protected, sensitive data environments, through Research Data Management (RDM) practices that are crucial to trustworthy AI systems. Using economic risk assessments as our data domain, we propose a novel approach that leverages oneAPI's unified computing capabilities to process and synthesize sensitive data while maintaining privacy, establishing a performance baseline for human-centered Artificial Intelligence (AI). Our preliminary study analyzes 10 economic cases, first modeling the topics with Latent Dirichlet Allocation (LDA) and human analysis, and then comparing those patterns with LLM-generated insights using accelerated topic modeling. The methodology introduces a four-tier privacy-preservation metric that quantifies information exposure rates, entity detection, and topic-level abstraction. Initial results demonstrate an 82% topic alignment between human and AI reasoning patterns, while maintaining 84% privacy preservation on our proposed scale. The oneAPI implementation shows promising results in handling compute-intensive, privacy-preserving transformations on unified hardware. This research contributes to the field of privacy-aware human-AI collaboration in sensitive data domains, where reasoning alignment and data protection are crucial.
dc.description.sponsorship: University of Ottawa Research, Ottawa, Canada; Columbia University, New York, NY, USA
dc.identifier.uri: http://hdl.handle.net/10393/50229
dc.identifier.uri: https://doi.org/10.20381/ruor-30953
dc.language.iso: en
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: LDA
dc.subject: LLM
dc.subject: Cognitive alignment
dc.subject: oneAPI
dc.subject: topic modeling
dc.subject: human-AI collaboration
dc.subject: RDM
dc.title: Modeling Human-AI Cognitive Alignment on Protected Data
dc.type: Working Paper

Files

Original bundle

Name: Modeling Human-AI Cognitive Alignment on Protected Data.pdf
Size: 550.67 KB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 2.26 KB
Format: Item-specific license agreed to upon submission