Published in:

Volume 12 Issue 4
April-2025
eISSN: 2349-5162

Unique Identifier

Published Paper ID: JETIR2504A42
Registration ID: 560456

Page Number: k344-k355


Title

HARP: Human Answer Reconciliation Pipeline

Abstract

The increasing demand for scalable and consistent evaluation of open-ended responses in educational environments necessitates robust AI-assisted assessment systems. This paper introduces the Human Answer Reconciliation Pipeline (HARP), a modular and extensible framework designed to automate the grading of text-based student responses using Large Language Models (LLMs). Unlike traditional systems that rely solely on reference answers, HARP supports both reference-based and standalone answer evaluation modes, enabling flexible assessment across diverse pedagogical contexts. The system features a FastAPI-based backend with customizable scoring formats (e.g., 0–10, 0–5, percentage), batch evaluation capabilities, and caching for performance optimization. Central to HARP's architecture is the Model Context Protocol (MCP) Server, which enables modular LLM integration, file-based request handling, and standardized evaluation flows. Additionally, HARP employs techniques such as Reference Answer Guided (RAG) evaluation and zero/few-shot prompting strategies to enhance scoring accuracy and interpretability. Designed for deployment in academic institutions, e-learning platforms, and AI research settings, HARP bridges the gap between manual grading and intelligent automation, delivering transparent, adaptable, and pedagogically aligned assessments.
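
As a concrete illustration of the scoring flow outlined above, the sketch below shows how a HARP-style FastAPI endpoint could accept a student answer, an optional reference answer, and a score format, and assemble a zero-shot grading prompt. This is a minimal sketch based only on the abstract; the identifiers (ScoreFormat, ScoreRequest, build_prompt, /evaluate) and the endpoint shape are hypothetical and do not reflect the actual HARP implementation.

# Hypothetical sketch of a HARP-style scoring endpoint; all names are
# illustrative only, not taken from the HARP codebase.
from enum import Enum
from typing import Optional

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ScoreFormat(str, Enum):
    TEN_POINT = "0-10"
    FIVE_POINT = "0-5"
    PERCENTAGE = "percentage"

class ScoreRequest(BaseModel):
    question: str
    student_answer: str
    reference_answer: Optional[str] = None  # omit for standalone evaluation mode
    score_format: ScoreFormat = ScoreFormat.TEN_POINT

def build_prompt(req: ScoreRequest) -> str:
    """Assemble a zero-shot grading prompt; reference-guided when a reference is given."""
    parts = [
        f"Grade the following answer on a {req.score_format.value} scale.",
        f"Question: {req.question}",
    ]
    if req.reference_answer:
        parts.append(f"Reference answer: {req.reference_answer}")
    parts.append(f"Student answer: {req.student_answer}")
    parts.append("Return a numeric score and a one-sentence justification.")
    return "\n".join(parts)

@app.post("/evaluate")
def evaluate_answer(req: ScoreRequest) -> dict:
    prompt = build_prompt(req)
    # The call to an LLM backend (e.g., routed through an MCP server, as the
    # paper describes) would go here; this stub returns only the prompt.
    return {"prompt": prompt, "score": None}

A batch endpoint would follow the same pattern over a list of ScoreRequest objects, with responses cached per request to avoid repeated LLM calls, in line with the batch evaluation and caching features described in the abstract.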

Key Words

Automated Answer Evaluation, Human Answer Reconciliation Pipeline (HARP), Large Language Models (LLMs), FastAPI, Model Context Protocol (MCP), Educational Technology, RAG Evaluation, NLP in Education, Batch Scoring System, AI-based Grading.

Cite This Article

"HARP: Human Answer Reconciliation Pipeline ", International Journal of Emerging Technologies and Innovative Research (www.jetir.org), ISSN:2349-5162, Vol.12, Issue 4, page no.k344-k355, April-2025, Available :http://www.jetir.org/papers/JETIR2504A42.pdf

ISSN


2349-5162


Publication Details

Published Paper ID: JETIR2504A42
Registration ID: 560456
Published In: Volume 12 | Issue 4 | Year April-2025
DOI (Digital Object Identifier):
Page No: k344-k355
Country: India (Raipur, Chhattisgarh)
Area: Engineering
ISSN Number: 2349-5162
Publisher: IJ Publication

