Toward an Interpretative Language Model: A CAS Framework with Six Interaction Variables to Capture Implicit Meaning


- Provide both branches with the same input prompts.
- Collect model outputs for:
  - Phrase completion
  - Interpretation (instructed to "explain the meaning")
  - Contextual paraphrasing
- Use human raters (blind to branch) to score:
  - Accuracy of interpretation
  - Depth of meaning (implicit/metaphoric detection)
  - Cultural appropriateness
  - Artistic/aesthetic quality
- Additionally, compute automatic metrics (see the sketch after this list):
  - Semantic similarity (BERTScore, BLEURT)
  - Creativity/novelty (n-gram novelty)
  - CAS-6 variable traceability (if exposed)
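
The automatic metrics above can be scripted directly. The following is a minimal sketch in Python, not part of the protocol itself: it assumes model outputs and reference interpretations are available as aligned lists of strings, uses the bert-score package for semantic similarity, and implements n-gram novelty by hand. BLEURT is omitted because it requires a separately downloaded checkpoint, and the function names are illustrative.

```python
# Sketch only: function names and the data layout (aligned lists of strings)
# are assumptions, not the study's actual evaluation code.
from bert_score import score as bertscore


def ngram_novelty(candidate: str, corpus_texts: list[str], n: int = 3) -> float:
    """Fraction of the candidate's n-grams that never appear in the corpus."""
    def ngrams(text: str) -> set:
        toks = text.lower().split()
        return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

    cand = ngrams(candidate)
    if not cand:
        return 0.0
    seen = set()
    for text in corpus_texts:
        seen |= ngrams(text)
    return len(cand - seen) / len(cand)


def automatic_metrics(outputs: list[str], references: list[str]) -> dict:
    # Semantic similarity: BERTScore F1 averaged over the evaluation set.
    _, _, f1 = bertscore(outputs, references, lang="en", verbose=False)
    # Creativity/novelty: mean trigram novelty against the reference corpus.
    novelty = sum(ngram_novelty(o, references) for o in outputs) / len(outputs)
    return {"bertscore_f1": f1.mean().item(), "trigram_novelty": novelty}
```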
D. Hypotheses

1. The CAS-6-augmented model will generate more semantically resonant and culturally grounded interpretations.
2. Outputs will show greater variation in connotative and artistic registers, indicating flexibility beyond literal prediction.
3. The CAS-6 injection will make model predictions more stable across paraphrased inputs and cross-cultural variations.
E. Analysis Plan

- Statistical tests:
  - Mann-Whitney U test or t-test between Branch A and Branch B human evaluation scores (see the sketch after this list).
  - Correlation between CAS-6 weights and rated interpretive depth.
- Qualitative analysis:
  - Case studies of key phrases illustrating success or failure in interpretive richness.
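
As an illustration of the statistical tests, the sketch below uses scipy.stats on per-item score lists; the data layout and function names are assumptions, and Welch's t-test is used as the parametric variant since equal variances are not guaranteed.

```python
# Sketch only: assumes human ratings are aligned per-item lists of floats.
from scipy import stats


def compare_branches(scores_a: list[float], scores_b: list[float]) -> dict:
    # Mann-Whitney U: non-parametric test for a shift between Branch A and B.
    u_stat, u_p = stats.mannwhitneyu(scores_a, scores_b, alternative="two-sided")
    # Welch's t-test: parametric alternative without the equal-variance assumption.
    t_stat, t_p = stats.ttest_ind(scores_a, scores_b, equal_var=False)
    return {"mann_whitney": (u_stat, u_p), "welch_t": (t_stat, t_p)}


def weight_depth_correlation(cas6_weights: list[float],
                             depth_ratings: list[float]) -> tuple:
    # Spearman rank correlation between CAS-6 weights and rated interpretive
    # depth (rank-based, since human ratings are ordinal).
    rho, p = stats.spearmanr(cas6_weights, depth_ratings)
    return rho, p
```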
F. Reproducibility and Ethics

- Datasets, annotations, and code will be open-sourced under a CC BY-NC-SA license.
- Human annotators are compensated and given clear guidelines to reduce bias.
- No personally identifiable information is used.

Appendix II. Model Architecture Diagram

Overview

The architecture is based on a standard transformer decoder (e.g., GPT-style), with an auxiliary CAS-6 Interpretive Layer injected between the final transformer block and the language modeling head. This layer allows the model to modulate its output based not only on next-token probability but also on six semantically rich interaction parameters (a minimal sketch of this injection follows).
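
Below is a minimal PyTorch sketch of that injection point, not the actual implementation: the decoder-only stack is emulated with nn.TransformerEncoder plus a causal mask, and the class names (CAS6InterpretiveLayer, CAS6DecoderLM) as well as the sigmoid-gated residual modulation are illustrative choices.

```python
# Sketch only: a GPT-style stack with a CAS-6 layer before the LM head.
import torch
import torch.nn as nn


class CAS6InterpretiveLayer(nn.Module):
    """Modulates final hidden states with six learned interaction variables."""

    def __init__(self, d_model: int, num_cas_vars: int = 6):
        super().__init__()
        self.var_proj = nn.Linear(d_model, num_cas_vars)    # hidden -> 6 variables
        self.modulation = nn.Linear(num_cas_vars, d_model)  # 6 variables -> hidden

    def forward(self, hidden: torch.Tensor):
        # cas_vars: (batch, seq_len, 6), exposed for traceability analysis.
        cas_vars = torch.sigmoid(self.var_proj(hidden))
        # Residual modulation: hidden states shifted by the CAS-6 signal.
        return hidden + self.modulation(cas_vars), cas_vars


class CAS6DecoderLM(nn.Module):
    """Decoder-only LM with a CAS-6 Interpretive Layer before the LM head."""

    def __init__(self, vocab_size: int = 32000, d_model: int = 512,
                 n_layers: int = 6, n_heads: int = 8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        block = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(block, n_layers)
        self.cas6 = CAS6InterpretiveLayer(d_model)
        self.lm_head = nn.Linear(d_model, vocab_size, bias=False)

    def forward(self, token_ids: torch.Tensor):
        x = self.embed(token_ids)
        # Causal mask so each position attends only to earlier tokens.
        mask = nn.Transformer.generate_square_subsequent_mask(token_ids.size(1))
        h = self.blocks(x, mask=mask)
        h, cas_vars = self.cas6(h)   # CAS-6 modulation before the LM head
        return self.lm_head(h), cas_vars


# Example forward pass over a dummy batch of token ids.
model = CAS6DecoderLM()
logits, cas_vars = model(torch.randint(0, 32000, (2, 16)))
print(logits.shape, cas_vars.shape)  # (2, 16, 32000) and (2, 16, 6)
```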

A. CAS-6 Enhanced LLM Architecture

[Diagram: input tokens (e.g., "crocodile", "tears", "eyes") flow through the transformer decoder stack into the CAS-6 Interpretive Layer and then the language modeling head.]
