Ensemble Learning with Dynamic Weight Adjustment

Published • 2025

Abstract

Ensemble methods, which combine the predictions of multiple models, are among the most reliable techniques in machine learning. But standard approaches to ensemble aggregation assume that the relative performance of individual models is constant across the entire input space. This assumption is almost always false: a model that excels in one region of the feature space may perform poorly in another. In a clinical dataset, for example, a linear regression model might outperform a neural network for certain patient profiles while the reverse is true for others. If we knew where each model was strong, we could weight their contributions accordingly, giving more influence to the models that are locally most accurate.

This project develops exactly that capability. I designed a dynamic weighting procedure for ensemble models that links aggregation weights to each observation’s location within the feature space. Rather than assigning fixed global weights, the algorithm optimizes a linking function that adjusts the weights based on feature-space position. The premise is simple but powerful: by recognizing that models have local strengths, we can construct ensembles that outperform any individual member and any fixed-weight combination.
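To make the idea concrete, here is a minimal sketch of one way such a scheme can work. It is not the paper's algorithm, just an illustrative stand-in under simple assumptions: two base regressors are fit on training data, a logistic-regression "linking function" (a hypothetical choice) is trained on held-out data to predict which model is locally more accurate from the features alone, and its class probabilities are then used as position-dependent aggregation weights.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(600, 1))
# Piecewise target: linear on the left, nonlinear on the right,
# so each base model has a region of local strength.
y = np.where(X[:, 0] < 0, 2.0 * X[:, 0], np.sin(3 * X[:, 0]))
y = y + rng.normal(0, 0.1, 600)

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.5, random_state=0)

base = [
    LinearRegression().fit(X_tr, y_tr),
    DecisionTreeRegressor(max_depth=4, random_state=0).fit(X_tr, y_tr),
]

# Per-sample held-out errors reveal which model is locally better.
errs = np.column_stack([np.abs(m.predict(X_val) - y_val) for m in base])
best = errs.argmin(axis=1)  # index of the locally best model

# Linking function: maps feature-space position to aggregation weights.
gate = LogisticRegression().fit(X_val, best)

def dynamic_predict(Xq):
    """Combine base predictions with feature-dependent weights."""
    w = gate.predict_proba(Xq)  # (n, 2); each row sums to 1
    preds = np.column_stack([m.predict(Xq) for m in base])
    return (w * preds).sum(axis=1)

def mse(p):
    return float(np.mean((p - y_val) ** 2))
```

On this toy problem the dynamic ensemble should roughly track the locally better model in each region, whereas a fixed 50/50 blend is dragged down everywhere by the globally weaker one. The real method replaces this two-step heuristic with direct optimization of the linking function.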

The algorithm has been implemented, benchmarked against leading methods, and is under review at Pattern Recognition.