As the demand for high-speed, low-latency wireless services grows, cellular networks are becoming increasingly dense and complex. To overcome the limitations of traditional orthogonal access methods, non-orthogonal multiple access (NOMA) offers a transformative solution aligned with fifth-generation (5G) and beyond-5G technologies [1]. NOMA employs superposition coding and power-level differentiation among users so that several users can share the same time and frequency resources [2]. This approach significantly enhances cellular systems by supporting higher user density, increasing overall capacity, and improving energy efficiency. A recent study investigated power-domain NOMA (PNOMA) [3], which allocates less power to users with better channel conditions and more power to users with worse channel conditions. In a NOMA-enabled network, all user signals are combined into a single superimposed signal and transmitted over the channel. At the receiving end, users with poorer channel conditions, whose signals arrive at higher power levels, can detect their own signals by treating the others as background noise. Conversely, users assigned lower power levels must employ successive interference cancellation (SIC) [4], in which simultaneously transmitted signals are decoded sequentially: stronger interfering signals are removed first, making it easier to extract the weaker ones. In practical scenarios [5,6,7], however, errors during the SIC process result in imperfect SIC conditions. In a fading environment, acquiring accurate channel state information (CSI) becomes challenging, leading to imperfect SIC and residual interference. Under Rayleigh fading [6], rapid and unpredictable variations in signal strength can cause errors when decoding the higher-power users' signals, propagating errors through the SIC process. This degradation adversely affects overall system performance, increasing the bit error rate (BER) for users that rely on accurate interference cancellation.
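To make the power-domain principle and the effect of imperfect SIC concrete, consider a standard two-user downlink illustration (the notation here is assumed for exposition rather than taken from [2,3,4]): total power $P$, power-allocation coefficients $a_1 > a_2$ with $a_1 + a_2 = 1$, channel gains $h_1$ (weak user) and $h_2$ (strong user), and noise power $\sigma^2$. The base station transmits the superimposed signal
\[
x = \sqrt{a_1 P}\, x_1 + \sqrt{a_2 P}\, x_2 .
\]
The weak user decodes $x_1$ directly, treating $x_2$ as background noise,
\[
\mathrm{SINR}_1 = \frac{a_1 P \lvert h_1 \rvert^2}{a_2 P \lvert h_1 \rvert^2 + \sigma^2},
\]
while the strong user first removes $x_1$ via SIC before decoding $x_2$,
\[
\mathrm{SINR}_2 = \frac{a_2 P \lvert h_2 \rvert^2}{\epsilon\, a_1 P \lvert h_2 \rvert^2 + \sigma^2},
\]
where $\epsilon \in [0,1]$ models the residual interference: $\epsilon = 0$ corresponds to perfect SIC, and $\epsilon > 0$ captures the imperfect SIC conditions described above, which inflate the BER of the SIC-dependent user.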
To address signal recognition challenges caused by multi-user interference in NOMA systems, Sadat et al. [8] proposed a deep learning (DL) algorithm for accurate signal detection, significantly reducing recognition errors. Extending this concept, Xu et al. [9] highlighted the importance of DL-based channel estimation, demonstrating substantial improvements in symbol error rate (SER) performance across varying fading conditions. Building on this idea, recent work [10] showed how DL-based channel estimation under realistic Nakagami-m fading and user mobility can significantly improve CSI accuracy, lowering both the required transmission power and the BER. Gaballa et al. [11] employed deep neural networks (DNNs) to optimize transmission power allocation in NOMA systems. In particular, deep reinforcement learning techniques such as the deep Q-network (DQN) algorithm [12] have been introduced to predict channel parameters, optimize resource allocation, and maximize the user sum rate under stringent quality-of-service (QoS) and total power constraints. Dipinkrishnan et al. [13] conducted a comprehensive outage analysis of uplink and downlink NOMA systems, showing that approximating the Rician fading model with a gamma distribution notably reduces computational complexity, enhancing the practical feasibility and robustness of NOMA for real-time cellular applications. Studies [14,15,16] have demonstrated the significance of efficient power allocation for real-time wireless activities such as web browsing, online gaming, video streaming, video calls, and VoIP. Each application has unique QoS demands and bandwidth requirements, necessitating tailored power allocation strategies. Video and VoIP calls [14] demand consistently higher transmission power because of their latency sensitivity and need for reliable connectivity, whereas applications such as web browsing require comparatively little power owing to their lower data-rate demands. Online gaming and video streaming [15] fall between these extremes, underscoring the need for adaptive, balanced power management that ensures user satisfaction without overburdening network resources. NOMA [17,18] emerges as a natural choice for such power allocation because of its inherent capability to serve multiple users within the same frequency and time resources by differentiating their power levels.
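As a minimal sketch of the application-aware weighting idea (the priority values and application classes below are illustrative assumptions, not figures from [14,15,16]), a power budget can be split in proportion to per-application priorities:

```python
# Hypothetical illustration: mapping application classes to normalized
# power-allocation fractions. Priorities are assumptions for exposition.
APP_PRIORITY = {
    "voip": 1.0,          # latency-sensitive, needs reliable power
    "video_call": 0.9,
    "video_stream": 0.6,
    "gaming": 0.6,
    "web_browsing": 0.2,  # low data-rate demand
}

def power_fractions(active_apps, total_power=1.0):
    """Split the power budget in proportion to application priority."""
    weights = [APP_PRIORITY[a] for a in active_apps]
    total = sum(weights)
    return {a: total_power * w / total for a, w in zip(active_apps, weights)}

print(power_fractions(["voip", "gaming", "web_browsing"]))
# {'voip': 0.556, 'gaming': 0.333, 'web_browsing': 0.111}
```

A practical scheme would additionally condition these fractions on instantaneous channel state, which is precisely the gap the model proposed below targets.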
To address these challenges, this study proposes a priority-fading weighted power allocation (PF-WPA) model to optimize power allocation in NOMA systems. NOMA-based power allocation is significant because it efficiently accommodates multiple real-time applications (e.g., VoIP, video calls, and online gaming) within limited bandwidth and stringent latency constraints by assigning differential power levels. The PF-WPA model dynamically manages power based on application-specific priorities and real-time channel conditions. To enhance adaptability and prediction accuracy in fluctuating environments (e.g., Rayleigh, Rician, and Nakagami-m fading), the PF-WPA weighting is embedded directly into the kernel computations of the long short-term memory (LSTM) gates. Extensive graphical analyses evaluate the allocation efficiency of the proposed scheme, and comprehensive simulations and comparisons with ConvLSTM and MLP-LSTM models demonstrate its superiority in sum-rate performance, fairness, and convergence behavior.
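A minimal sketch of how such a weight might enter the gate computations, assuming the priority-fading weight acts as a multiplicative factor on the input-kernel products of each gate (all names here are illustrative; the paper's exact formulation is given in Section III):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pf_lstm_step(x, h, c, params, w_pf):
    """One LSTM step with a priority-fading weight w_pf scaling the
    input-kernel products inside each gate (illustrative assumption)."""
    Wi, Ui, bi = params["i"]  # input gate kernels and bias
    Wf, Uf, bf = params["f"]  # forget gate
    Wo, Uo, bo = params["o"]  # output gate
    Wg, Ug, bg = params["g"]  # candidate cell state

    i = sigmoid(w_pf * (Wi @ x) + Ui @ h + bi)
    f = sigmoid(w_pf * (Wf @ x) + Uf @ h + bf)
    o = sigmoid(w_pf * (Wo @ x) + Uo @ h + bo)
    g = np.tanh(w_pf * (Wg @ x) + Ug @ h + bg)

    c_next = f * c + i * g              # updated cell state
    h_next = o * np.tanh(c_next)        # updated hidden state
    return h_next, c_next

# Toy usage with random kernels (dimensions chosen arbitrarily).
rng = np.random.default_rng(0)
def make_gate(nx, nh):
    return rng.normal(size=(nh, nx)), rng.normal(size=(nh, nh)), np.zeros(nh)

params = {k: make_gate(4, 8) for k in "ifog"}
h, c = pf_lstm_step(np.ones(4), np.zeros(8), np.zeros(8), params, w_pf=0.7)
```

The design intent is that w_pf, derived from application priority and the prevailing fading statistics, modulates how strongly fresh channel observations influence the gates, letting the recurrent predictor adapt its power-allocation estimates as conditions fluctuate.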
The remainder of this paper is structured as follows: Section II introduces the proposed PF-WPA model. Section III details the integration of the PF-WPA scheme with the LSTM architecture. Section IV presents an in-depth analysis of power allocation under various fading scenarios. Section V provides a comprehensive validation of the proposed model through comparative assessments. Finally, Section VI concludes the paper with key findings and outlines potential directions for future research.