Machine learning mathematical optimization techniques for enhancing model efficiency and convergence

Mamatha N 1 and Bhuvaneshwari Shetty 2, *

1 Lecturer in Science Department, Karnataka (Govt) Polytechnic Mangalore, Karnataka, India.
2 Lecturer in Computer Science Department, Government Polytechnic for Women, Bondel Mangalore, Karnataka, India.
 
Review Article
World Journal of Advanced Research and Reviews, 2021, 10(03), 471-481
Article DOI: 10.30574/wjarr.2021.10.3.0235
 
Publication history: 
Received on 17 April 2021; revised on 30 May 2021; accepted on 02 June 2021
 
Abstract: 
Mathematical optimization plays a crucial role in machine learning, providing the foundation for efficient model training, parameter tuning, and convergence improvement. Effective optimization techniques enhance the learning process by minimizing loss functions, improving generalization, and accelerating convergence rates. This paper explores various optimization methods, including convex and non-convex optimization, gradient-based approaches such as stochastic gradient descent (SGD), Adam, and RMSprop, as well as gradient-free techniques like evolutionary algorithms and Bayesian optimization. We analyze their theoretical foundations, computational complexity, and practical implications in different machine learning tasks, including supervised and unsupervised learning, reinforcement learning, and deep learning. Furthermore, we present a comparative analysis of these optimization strategies, supported by mathematical formulations, empirical results, and illustrative figures and tables. The findings of this study aim to provide insights into selecting appropriate optimization techniques based on problem characteristics, model architecture, and computational constraints. Finally, we discuss emerging trends in optimization, including second-order methods, meta-learning approaches, and hybrid optimization frameworks, highlighting their potential to further enhance model efficiency and convergence in real-world applications.
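As a minimal illustration of the gradient-based methods named above, the sketch below compares plain gradient descent with the Adam update rule on a one-dimensional quadratic loss. The loss function, learning rates, and step counts are illustrative assumptions chosen for this example, not values taken from the paper.

```python
# Illustrative comparison (not from the paper): gradient descent vs. Adam
# on the quadratic loss L(w) = (w - 3)^2, whose minimum is at w = 3.

def grad(w):
    """Gradient of L(w) = (w - 3)^2."""
    return 2.0 * (w - 3.0)

def gradient_descent(w0, lr=0.1, steps=100):
    """Plain gradient descent: w <- w - lr * grad(w)."""
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)
    return w

def adam(w0, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8, steps=200):
    """Adam: adaptive steps from bias-corrected first/second moment estimates."""
    w, m, v = w0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = grad(w)
        m = beta1 * m + (1 - beta1) * g       # first-moment (mean) estimate
        v = beta2 * v + (1 - beta2) * g * g   # second-moment (variance) estimate
        m_hat = m / (1 - beta1 ** t)          # bias correction
        v_hat = v / (1 - beta2 ** t)
        w -= lr * m_hat / (v_hat ** 0.5 + eps)
    return w

print(gradient_descent(0.0))  # approaches the minimum at w = 3
print(adam(0.0))              # also converges toward w = 3
```

Both optimizers reach the neighborhood of the minimizer; Adam's per-coordinate scaling is what makes it attractive for the ill-conditioned, non-convex losses discussed later in the paper.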
 
Keywords: 
Mathematical Optimization; Machine Learning; Gradient Descent; Stochastic Gradient Descent (SGD); Adam Optimizer; Convex and Non-Convex Optimization; Gradient-Free Optimization
 