Modeling Human-AI Cognitive Alignment on Protected Data


Creative Commons

Attribution-NonCommercial-NoDerivatives 4.0 International

Abstract

This study explores quantifying cognitive alignment between human expert reasoning and solutions generated by Large Language Models (LLMs) in protected, sensitive data environments, through Research Data Management (RDM) practices that are crucial to trustworthy AI systems. Using economic risk assessments as our data domain, we propose a novel approach that leverages oneAPI's unified computing capabilities to process and synthesize sensitive data while maintaining privacy, establishing a performance baseline for human-centered Artificial Intelligence (AI). Our preliminary study analyzes 10 economic cases, first modeling their topics with Latent Dirichlet Allocation (LDA) and human analysis, and then comparing the resulting patterns with LLM-generated insights using accelerated topic modeling. The methodology introduces a four-tier privacy preservation metric that quantifies information exposure rates, entity detection, and topic-level abstraction. Initial results demonstrate 82% topic alignment between human and AI reasoning patterns, while maintaining 84% privacy preservation on our proposed scale. The oneAPI implementation shows promising results in handling unified, compute-intensive privacy-preserving transformations. This research contributes to the field of privacy-aware human-AI collaboration in sensitive data domains, where reasoning alignment and data protection are crucial.
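The record does not include implementation details, but the pipeline the abstract describes (topic modeling of case texts, then scoring how well human-derived and LLM-derived topics align) can be illustrated with a minimal sketch. The snippet below uses scikit-learn's LatentDirichletAllocation; the toy corpus, topic count, and Jaccard-overlap alignment score are illustrative assumptions, not the authors' actual data, configuration, or metric.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy stand-ins for human-written and LLM-generated risk summaries.
human_docs = [
    "credit risk exposure rises with interest rate volatility",
    "liquidity shortfall drives default risk in small firms",
]
llm_docs = [
    "interest rate volatility increases credit risk exposure",
    "small firms face default risk when liquidity is short",
]

def top_terms(docs, n_topics=2, n_terms=5):
    """Fit LDA on the documents and return each topic's top terms as a set."""
    vec = CountVectorizer(stop_words="english")
    counts = vec.fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0).fit(counts)
    vocab = vec.get_feature_names_out()
    return [set(vocab[i] for i in comp.argsort()[-n_terms:]) for comp in lda.components_]

def jaccard(a, b):
    return len(a & b) / len(a | b)

human_topics = top_terms(human_docs)
llm_topics = top_terms(llm_docs)

# Match each human topic to its best-overlapping LLM topic and average the
# overlaps as a crude topic-alignment score.
alignment = sum(max(jaccard(h, l) for l in llm_topics) for h in human_topics) / len(human_topics)
print(f"topic alignment: {alignment:.2f}")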

Keywords

LDA, LLM, Cognitive alignment, oneAPI, topic modeling, human-AI collaboration, RDM
