LLM Knowledge Cartography: Parameter Scaling and Factual Accuracy in Small Language Models
Abstract
This technical note documents an experimental investigation into factual accuracy across language models of varying parameter counts. Using a structured protocol of 25 questions spanning geography, science, history, culture, and technical domains, we assessed whether smaller language models could serve as reliable factual knowledge bases for constrained computational environments. Results reveal a clear scaling threshold: models below approximately 3 billion parameters exhibited systematic confabulation patterns, while larger models demonstrated reliable factual retrieval. These findings inform architectural decisions for the Macroscope environmental intelligence system, specifically regarding local versus cloud-based model deployment for sensor data interpretation.
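To make the assessment protocol described above concrete, the following is a minimal sketch of a factual-accuracy harness of the kind the note describes: a fixed question set spanning several domains, scored by whether each model's response contains the expected answer. All model identifiers, the sample probes, and the query_model hook are hypothetical placeholders, not the actual 25-question instrument or model roster documented in the note.

```python
"""Illustrative sketch of a factual-accuracy scoring protocol.

Hypothetical: model names, probes, and the query hook below are
placeholders; swap in a real model client to run a genuine evaluation.
"""

from dataclasses import dataclass


@dataclass
class Probe:
    domain: str     # e.g. geography, science, history, culture, technical
    question: str
    expected: str   # canonical answer checked by substring match


def query_model(model_name: str, question: str) -> str:
    """Placeholder for however a given model is queried (local runtime,
    HTTP API, etc.). Returns an empty string in this stub."""
    return ""


def score_model(model_name: str, probes: list[Probe]) -> float:
    """Fraction of probes whose response contains the expected answer."""
    hits = 0
    for probe in probes:
        response = query_model(model_name, probe.question).lower()
        if probe.expected.lower() in response:
            hits += 1
    return hits / len(probes)


if __name__ == "__main__":
    probes = [
        Probe("geography", "What is the capital of Australia?", "Canberra"),
        Probe("science", "What is the chemical symbol for tungsten?", "W"),
    ]
    # Hypothetical identifiers spanning the parameter range of interest.
    for model in ["tiny-1b", "small-3b", "large-8b"]:
        print(model, f"{score_model(model, probes):.2f}")
```

Substring matching is the simplest scoring rule; the actual protocol may use stricter or human-verified grading, as the note's methodology section would specify.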
Keywords
- confabulation
- edge computing
- factual accuracy
- knowledge engineering
- language models
- parameter scaling
- small language models
AI Collaboration Disclosure
AI collaboratively designed the experimental protocol, executed model queries across the test subjects, and assisted in drafting this technical note. The human researcher directed all methodological decisions and verified all factual claims.
Human review: full
Cite This Document
Permanent URL: https://canemah.org/archive/CNL-TN-2025-001