A MAMDANI-ANFIS BASED MULTIMODAL FUZZY INFERENCE MODEL FOR HETEROGENEOUS DATA INTEGRATION


Modern American Journals

Abstract

In recent years, the integration of heterogeneous data sources has become a crucial challenge in developing intelligent decision-making systems. Traditional neuro-fuzzy architectures are typically designed to handle a single type of input data, which limits their applicability to real-world multimodal environments. This paper proposes a Mamdani-ANFIS based multimodal fuzzy inference model capable of processing and reasoning over numeric, categorical, textual, and visual data within a unified framework. The proposed model performs data-type-specific fuzzification: numeric and categorical variables are fuzzified through Gaussian membership functions, while textual and visual inputs are first clustered and then fuzzified by applying a Gaussian function to the distance between each cluster centroid and the current sample. This hybrid fuzzification mechanism enables adaptive and interpretable fuzzy reasoning across diverse data modalities. The inference mechanism follows the classical Mamdani approach, in which fuzzy rules are constructed through the conjunction of input membership degrees and the aggregated fuzzy outputs are defuzzified using the centroid method. The model parameters are tuned through hybrid learning that combines gradient-based optimization with rule-weight adjustment. Experimental results on a heterogeneous benchmark dataset show that the proposed system achieves a classification accuracy of 0.69, outperforming baseline fuzzy and neural models in interpretability and robustness to multimodal noise. The findings indicate that integrating clustering-based fuzzification and Mamdani-type reasoning within an ANFIS structure offers a promising direction for intelligent systems that must learn from complex, cross-domain data. The proposed architecture helps bridge the gap between human-like interpretability and computational intelligence in multimodal learning environments.
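The pipeline sketched in the abstract can be illustrated with a minimal example. The following Python sketch is an assumption-laden toy, not the authors' implementation: it shows Gaussian membership fuzzification, cluster-distance fuzzification for an embedded (e.g. textual or visual) input, Mamdani min-conjunction for rule firing strengths, max-aggregation of clipped output sets, and centroid defuzzification. All set centers, widths, cluster centroids, and the two example rules are invented for illustration.

```python
import numpy as np

def gaussian_mf(x, c, sigma):
    """Gaussian membership degree of x in a fuzzy set centered at c."""
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

def cluster_membership(v, centroids, sigma):
    """Clustering-based fuzzification: Gaussian of the distance from an
    embedded sample v to each cluster centroid (hypothetical scheme)."""
    d = np.linalg.norm(centroids - v, axis=1)
    return np.exp(-0.5 * (d / sigma) ** 2)

def centroid_defuzzify(y, mu):
    """Centroid (center-of-gravity) defuzzification of an aggregated set."""
    return np.sum(y * mu) / np.sum(mu)

# A numeric input fuzzified with two Gaussian sets ("low", "high").
x1 = 4.0
mu1_low, mu1_high = gaussian_mf(x1, 2.0, 1.5), gaussian_mf(x1, 8.0, 1.5)

# A 2-D embedding of a textual/visual sample, fuzzified by cluster distance.
emb = np.array([0.2, 0.9])
centroids = np.array([[0.0, 1.0], [1.0, 0.0]])  # invented centroids
mu2_c0, mu2_c1 = cluster_membership(emb, centroids, sigma=0.5)

# Mamdani rules: firing strength = min (conjunction) of antecedent degrees.
w_rule1 = min(mu1_low, mu2_c0)   # IF x1 low  AND emb near cluster 0 -> "small"
w_rule2 = min(mu1_high, mu2_c1)  # IF x1 high AND emb near cluster 1 -> "large"

# Clip each rule's output set by its firing strength, aggregate with max,
# then defuzzify with the centroid method over a discretised universe.
y = np.linspace(0.0, 10.0, 101)
out_small = np.minimum(w_rule1, gaussian_mf(y, 2.0, 1.5))
out_large = np.minimum(w_rule2, gaussian_mf(y, 8.0, 1.5))
aggregated = np.maximum(out_small, out_large)
crisp = centroid_defuzzify(y, aggregated)  # crisp decision in [0, 10]
```

The hybrid learning the abstract mentions (gradient-based tuning of the Gaussian centers/widths plus rule-weight adjustment) would sit on top of this forward pass; it is omitted here for brevity.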
