Sciendo RSS feed for the International Journal on Smart Sensing and Intelligent Systems

ARTICLE: Parkinson's Disease Detection: A Review of Techniques, Datasets, Modalities, and Open Challenges<abstract> <title style='display:none'>Abstract</title> <p>Parkinson's disease (PsD) is a prevalent neurodegenerative disorder that intensifies with age. It is caused by the progressive death of the dopaminergic neurons in the substantia nigra pars compacta region of the human brain. In the absence of a single accurate test, and given the dependency on clinicians, intensive research is being carried out to automate early disease detection and to predict disease severity. In this study, a detailed review of various artificial intelligence (AI) models applied to different datasets across different modalities is presented. The emotional intelligence (EI) modality, which can be used for early detection and can help patients maintain a comfortable lifestyle, has been identified. EI is a predominant, emerging technology that can be used to detect PsD at the initial stages and to enhance the socialization of PsD patients and their attendants. Challenges and possibilities that can help bridge the gap between the fast-growing technologies meant to detect PsD and the actual implementation of an automated PsD detection model are presented in this research. This review highlights the prominence of the support vector machine (SVM) classifier, which achieves an accuracy of about 99% in many modalities such as magnetic resonance imaging (MRI), speech, and electroencephalogram (EEG). A 100% accuracy is achieved in the EEG and handwriting modalities using a convolutional neural network (CNN) and the optimized crow search algorithm (OCSA), respectively.
Also, an accuracy of 95% is achieved in PsD progression detection using Bagged Tree, artificial neural network (ANN), and SVM classifiers. The maximum accuracy of 99% is attained using K-nearest neighbors (KNN) and Naïve Bayes classifiers on EEG signals using EI. The most widely used dataset is identified as the Parkinson's Progression Markers Initiative (PPMI) database.</p> </abstract>

ARTICLE: Advances in PCG Signal Analysis using AI: A Review<abstract> <title style='display:none'>Abstract</title> <p>This paper reviews the milestones and various modern-day approaches in phonocardiogram (PCG) signal analysis and explains the different phases and methods of heart sound signal analysis. Many physicians depend heavily on ECG experts, which raises healthcare costs and reflects declining stethoscope skills. Auscultation alone is therefore not a simple solution for detecting valvular heart disease, so doctors prefer clinical evaluation using Doppler echocardiography and other pathological tests. However, the benefits of auscultation and other clinical evaluations can be combined with computer-aided diagnosis methods that help considerably in measuring and analyzing various heart sounds. This review covers the most recent research on segmenting valvular heart sounds during the preprocessing stages, including adaptive fuzzy systems, Shannon energy, time–frequency representation, and the discrete wavelet distribution for analyzing and diagnosing various heart-related diseases. Different convolutional neural network (CNN)-based deep-learning models are discussed for valvular heart sound analysis, such as LeNet-5, AlexNet, VGG16, VGG19, DenseNet121, Inception Net, Residual Net, GoogLeNet, MobileNet, SqueezeNet, and Xception Net. Among all deep-learning methods, Xception Net claimed the highest accuracy of 99.43 ± 0.03% and sensitivity of 98.58 ± 0.06%.
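As an illustrative aside to the Shannon energy preprocessing mentioned above, here is a minimal NumPy sketch; the frame length, hop size, and toy signal are our own assumptions, not values from the reviewed papers:

```python
import numpy as np

def shannon_energy_envelope(signal, frame_len=64, hop=32):
    """Framewise Shannon energy E = -mean(x^2 * log(x^2)) of a normalized
    signal; it emphasizes medium-intensity samples, which helps localize
    S1/S2 heart sounds before segmentation."""
    x = signal / (np.max(np.abs(signal)) + 1e-12)   # normalize to [-1, 1]
    env = []
    for start in range(0, len(x) - frame_len + 1, hop):
        sq = x[start:start + frame_len] ** 2
        env.append(-np.mean(sq * np.log(sq + 1e-12)))  # epsilon avoids log(0)
    return np.array(env)

# Toy PCG: a short 60 Hz burst (a stand-in "heart sound") inside silence
t = np.linspace(0, 1, 1024)
pcg = np.where((t > 0.4) & (t < 0.5), np.sin(2 * np.pi * 60 * t), 0.0)
env = shannon_energy_envelope(pcg)
print(env.argmax())  # the peak frame falls inside the burst
```

The envelope peaks where the burst lies, which is the property segmentation methods exploit to locate heart sounds.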
The review also summarizes recent advances in feature extraction and classification techniques for cardiac sounds, which should help researchers and readers to a great extent.</p> </abstract>

ARTICLE: AI for binary and multi-class classification of leukemia using a modified transfer learning ensemble model<abstract> <title style='display:none'>Abstract</title> <p>In leukemia diagnosis, automating the decision-making process can reduce dependence on the expertise of individual pathologists. While deep learning models have demonstrated promise in disease diagnosis, combining them can yield superior results. This research introduces an ensemble model that merges two pre-trained deep learning models, namely VGG-16 and Inception, using transfer learning. It aims to accurately classify leukemia subtypes using real and standard dataset images, with a focus on interpretability, for which Local Interpretable Model-Agnostic Explanations (LIME) are employed. The ensemble model achieves an accuracy of 83.33% in binary classification, outperforming the individual models. In multi-class classification, VGG-16 and Inception reach accuracies of 83.335% and 93.33%, respectively, while the ensemble model reaches an accuracy of 100%.</p> </abstract>

ARTICLE: Power Domain NOMA Transmission Using Relays<abstract> <title style='display:none'>Abstract</title> <p>Non-orthogonal multiple access (NOMA) serves multiple users simultaneously on the same frequency and time resources, which makes it a front runner for meeting the demands of high-traffic networks. In this paper, downlink NOMA and cooperative NOMA (CNOMA) are compared for achievable sum rates while varying different parameters: source transmit power, user transmit power, and power allocation.
Simulation results show that CNOMA achieves a higher sum rate than NOMA for all the parameters considered.</p> </abstract>

ARTICLE: …temperature measurement technique using optical channel as a signal transmitting media<abstract> <title style='display:none'>Abstract</title> <p>Measuring temperature and safely transmitting the signal to the control room for further processing is important for the process industry. In this paper, a modified head-mounted temperature measurement system using a thermocouple with opto-isolation has been developed. Here, the thermocouple is connected to terminals mounted on the ceramic base in the head of the thermowell. The system consists of two signal conditioners, one for the thermocouple and one for the AD590, with both signal-conditioning outputs applied to a summing circuit. The output of the summing circuit, which lies in the range of 1.73–3.43 V and is adjusted by a signal-conditioning circuit, is applied to the middle electrode of a Mach–Zehnder interferometer (MZI). The MZI produces normalized optical signals according to the variations in temperature. These optical signals are then transmitted safely to the control room in the inflammable process industry. The transmitted signals are demodulated in the control room and then sent to a PC through an opto-isolator circuit and a DAS card. The necessary theory and mathematical equations have been derived, and the experimental and simulation results are reported.</p> </abstract>

ARTICLE: …of insulation degradation by-products in transformer oil using ZnO coated IDC sensor<abstract> <title style='display:none'>Abstract</title> <p>Condition monitoring of oil-immersed in-service transformers to facilitate preventive maintenance is still a challenge. Monitoring 2-furfuraldehyde (2-FAL), released into the transformer oil as paper insulation degrades, together with moisture ingress, can provide insight into the health of transformer insulation.
Since 2-FAL and moisture are contaminants with high dielectric constants, capacitive-sensor-based detection is a potential solution. A novel interdigital capacitive (IDC) sensor that uses zinc oxide (ZnO) as a sensing film is reported in this paper to measure the concentration of 2-FAL and moisture. The sensor shows good sensitivity, approximately linear characteristics, and low characteristic drift.</p> </abstract>

ARTICLE: …the development of the industrial complex: Southern Federal District<abstract> <title style='display:none'>Abstract</title> <p>The purpose of this study is to build a cognitive model of a system: the industrial complex of the Southern Federal District of Russia. The Southern Federal District is a region in the south of Russia that includes the regions of Rostov, Volgograd, Astrakhan, and Krasnodar; the Republics of Adygea, Kalmykia, and Crimea; and the federal city of Sevastopol. The object of the study was the industrial complex of the Southern Federal District, which ranks 6th among the federal districts in terms of the share of manufacturing. The study was conducted at the Southern Federal University, which is located in the Southern Federal District of Russia. This article discusses the cognitive modeling method, which allows us to determine the degree of system stability and the direction of system development under varying degrees of influence of external factors. This method is used to build forecasts of the development of poorly structured socioeconomic systems. The cognitive modeling method made it possible to predict one of the scenarios for the development of the industrial complex of the southern region of the Russian Federation. For the subject under study, indicative, controlling, and regulating peaks have been identified, but forecasting is noted to be difficult owing to the lack of a general program for the industrial complex of the federal district.
The article presents scenarios of industrial development for the Rostov and Krasnodar regions, the most industrially developed regions in the Southern Federal District. Cognitive modeling builds development scenarios for the studied regional systems by means of impulse influence, that is, when one vertex affects another. The modeling results made it possible to build a cognitive map of the industrial complex of the Southern Federal District. This map showed that, in the case of an impulse impact on the industrial complex, regulatory measures on the part of the state need to be strengthened.</p> </abstract>

ARTICLE: …block size selection in a hybrid image compression algorithm employing the DCT and SVD<abstract> <title style='display:none'>Abstract</title> <p>The rationale behind this research stems from practical implementations in real-world scenarios, recognizing the critical importance of efficient image compression in fields such as medical imaging, remote sensing, and multimedia communication. This study introduces a hybrid image compression technique that employs adaptive block size selection and a synergistic combination of the discrete cosine transform (DCT) and singular value decomposition (SVD) to enhance compression efficiency while maintaining picture quality. Motivated by the potential to achieve significant compression ratios with distortion imperceptible to human observers, the hybrid approach addresses the escalating need for real-time image processing. The study pushes the boundaries of image compression by developing an algorithm that effectively combines conventional approaches with the intricacies of modern images, aiming for high compression ratios, adaptation to picture content, and real-time efficiency. This article presents a novel hybrid algorithm that dynamically combines the DCT, SVD, and adaptive block size selection to enhance compression performance while preserving image quality.
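As a rough illustration of the DCT–SVD combination described above, here is a minimal sketch with a fixed 8×8 block size; the adaptive block-size selection is omitted, and all parameter values are our own simplifications, not the paper's algorithm:

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_block_dct(img, block=8, keep=10):
    """Blockwise 2-D DCT: keep only the `keep` largest-magnitude coefficients
    per block, then reconstruct (lossy unless keep == block*block)."""
    out = np.zeros_like(img, dtype=float)
    for i in range(0, img.shape[0], block):
        for j in range(0, img.shape[1], block):
            c = dctn(img[i:i+block, j:j+block], norm='ortho')
            thresh = np.sort(np.abs(c).ravel())[-keep]
            c[np.abs(c) < thresh] = 0.0               # drop small coefficients
            out[i:i+block, j:j+block] = idctn(c, norm='ortho')
    return out

def compress_svd(img, rank=16):
    """Rank-`rank` truncated-SVD approximation of the whole image."""
    u, s, vt = np.linalg.svd(img, full_matrices=False)
    return u[:, :rank] @ np.diag(s[:rank]) @ vt[:rank, :]

rng = np.random.default_rng(0)
img = rng.random((64, 64))
psnr = lambda a, b: 10 * np.log10(1.0 / np.mean((a - b) ** 2))
print(psnr(img, compress_svd(compress_block_dct(img), rank=16)))
```

In a hybrid scheme of this kind, the DCT stage removes local high-frequency detail per block, while the SVD stage captures the image's global low-rank structure.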
The proposed technique achieves compression ratios of up to 60% and a peak signal-to-noise ratio (PSNR) exceeding 35 dB. Comparative evaluations demonstrate the algorithm's superiority over existing approaches in terms of compression efficiency and quality measures. The adaptability of this hybrid approach makes it useful across various disciplines: in multimedia, it enhances data utilization while preserving image integrity; in medical imaging, it supports accurate diagnosis with compression-induced distortion (CID) below 1%; and in remote sensing, it efficiently manages large datasets, reducing expenses. This flexibility positions the algorithm as a valuable tool for future advancements in a rapidly evolving technological landscape.</p> </abstract>

ARTICLE: …Modeling as a Forecasting Tool<abstract> <title style='display:none'>Abstract</title> <p>Under the current geopolitical conditions and the economic sanctions imposed on Russia, there is an objective need to formulate a strategic development plan for the economy as a whole and for specific sectors. Various methods and tools can be used for strategic planning. One of them is the program-targeted method of planning, which has proved to be an effective method of foresight. One can speak of failures in planning activities, but these failures were related not only to shortcomings in the application of science-based planning methods but also to the efficiency of the managerial apparatus that took the decisions. It should be noted that it was in the period when science-based planning methods were applied that the country managed to form and develop industrial production in various sectors, and the issue of import substitution did not arise then, as all stages of the product life cycle were represented at the enterprises.
Currently, the country faces the problem of strategic development in the context of the imposed economic sanctions. The volume of sanctions increases day by day, and one can only speculate about future restrictions. Therefore, there is a need for forecasting at the level of the whole country, individual industries, and enterprises. One such method is cognitive modeling based on fuzzy logic. This approach uses cognitive principles and methods to understand the behavior of individuals in the system, as well as the interactions and feedback loops between its various components. The purpose of this paper is to retrospectively analyze the application of the cognitive method to modeling. Information systems developed in the country to implement cognitive modeling tasks are reviewed, and existing software products are assessed. Theoretical material on the cognitive approach to modeling is also presented in order to explain the application of this toolkit to modeling socioeconomic systems using elements of fuzzy logic.</p> </abstract>

ARTICLE: …of outlier detection approaches in a Smart Cities sensor data context<abstract> <title style='display:none'>Abstract</title> <p>This study examines outlier detection in time-series sensor data from PurpleAir low-cost sensors in Athens, Greece. Focusing on key environmental parameters such as temperature, humidity, and particulate matter (PM) levels, the study applies the Interquartile Range (IQR) and Generalized Extreme Studentized Deviate (GESD) methods on an hourly and daily basis.
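To illustrate the two detectors named above, here is a minimal sketch of Tukey's IQR rule and the Generalized ESD test; the synthetic data, `max_outliers`, and `alpha` values are our own assumptions, not the study's settings:

```python
import numpy as np
from scipy import stats

def iqr_outliers(x, k=1.5):
    """Flag points outside [Q1 - k*IQR, Q3 + k*IQR] (Tukey's rule)."""
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return (x < q1 - k * iqr) | (x > q3 + k * iqr)

def gesd_outliers(x, max_outliers=5, alpha=0.05):
    """Generalized ESD: repeatedly remove the most extreme point and test its
    studentized deviation R against a t-based critical value lambda."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    work, work_idx = x.copy(), np.arange(n)
    results = []
    for i in range(1, max_outliers + 1):
        dev = np.abs(work - work.mean())
        j = int(dev.argmax())
        R = dev[j] / work.std(ddof=1)
        m = n - i + 1                              # sample size at this step
        t = stats.t.ppf(1 - alpha / (2 * m), m - 2)
        lam = (m - 1) * t / np.sqrt((m - 2 + t**2) * m)
        results.append((int(work_idx[j]), R > lam))
        work = np.delete(work, j)
        work_idx = np.delete(work_idx, j)
    # the number of outliers is the largest i whose R exceeded lambda
    last = max((k for k, (_, sig) in enumerate(results) if sig), default=-1)
    return [idx for idx, _ in results[:last + 1]]

# Synthetic hourly series with one injected spike at index 50
x = np.r_[np.linspace(-1, 1, 50), 100.0]
print(np.flatnonzero(iqr_outliers(x)), gesd_outliers(x))  # both flag index 50
```

Unlike the IQR rule, GESD requires choosing an upper bound on the number of outliers and a significance level, which is why the study's adjustment of alpha to the time scale matters.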
GESD detected more outliers than IQR, most of them in the PM data, while the temperature and humidity data had fewer outliers. Applying filters before outlier detection and adjusting alpha values to the time scale proved crucial. Outliers also significantly affected spatial interpolation, underscoring the need for spatial statistics in smart-city air quality management.</p> </abstract>

ARTICLE: …Validation: Perception and Localization Systems for Autonomous Vehicles using the Extended Kalman Filter Algorithm<abstract> <title style='display:none'>Abstract</title> <p>This study validates EKF-SLAM for indoor autonomous vehicles by experimentally fusing MPU6050 sensor and encoder data using an extended Kalman filter. Real-world tests show significant improvements, achieving high accuracy with errors of just 1% and 3% on the X and Y axes. An RPLiDAR A1M8 is used for mapping, producing accurate maps visualized through RViz in ROS. The research demonstrates the novelty and practical utility of EKF-SLAM in real-world scenarios, showcasing its effectiveness and precision.</p> </abstract>

ARTICLE: Highly Selective Real-Time Electroanalytical Detection of Sulfide Via Laser-Induced Graphene Sensor<abstract> <title style='display:none'>Abstract</title> <p>Herein, a novel miniaturized sensor for sulfide detection is presented. The sensor was fabricated on a flexible polyimide substrate via CO<sub>2</sub> laser ablation, followed by surface modification with methylene blue acting as a redox mediator. The sensor showed an acceptable linear detection range (0.5 μM–1 mM), an excellent limit of detection (0.435 μM), and a limit of quantification of 2.45 μM. Further, remarkable sensitivities of 0.295 μA/(μM mm<sup>2</sup>) over 0.5–50 μM and 0.0047 μA/(μM mm<sup>2</sup>) over 100 μM–1 mM were obtained.
The signal-to-noise ratio was found to be 2.76, and the sensor's performance was validated with real lake-water samples.</p> </abstract>

ARTICLE: …Selection and Optimization of Weights using Artificial Neural Network in Heterogeneous Wireless Environment<abstract> <title style='display:none'>Abstract</title> <p>Given the variety of networks, interfaces, media, and other resources available in today's heterogeneous wireless environments, mobile communication operates in a highly competitive setting. A problem arises when clients or users have access to numerous interfaces at once. Users therefore require an intelligent system to connect them to the best services based on their needs and preferences. Interface management controls the available interfaces and links the user to the best one. In this research, interface management with artificial neural networks (ANNs) enables the intelligent use of the various radio accesses/interfaces. The choice is made using various parameters of the different networks. In this paper, a back-propagation neural network (BPNN) method for switching between 3G, WLAN, 4G, and 5G networks is proposed. The various network properties are employed as selection parameters by allocating appropriate weights. The Fuzzy Analytic Hierarchy Process (FAHP) is used to initialize the weights, and the BPNN is used to optimize them. To obtain the best value, the target value and the original value are compared, and the difference between them is used to adjust the weights. The network is trained using backpropagation techniques.
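The weighted-attribute selection step described above can be illustrated with a minimal sketch; the attribute matrix and weights below are purely hypothetical placeholders, not FAHP values from the paper, and the BPNN weight-optimization stage is omitted:

```python
import numpy as np

# Hypothetical attribute matrix: rows = candidate networks, columns =
# normalized criteria (bandwidth, 1-delay, 1-cost, reliability).
# All values and weights are illustrative only.
networks = ["3G", "WLAN", "4G", "5G"]
attrs = np.array([
    [0.2, 0.6, 0.9, 0.7],   # 3G
    [0.6, 0.7, 0.8, 0.6],   # WLAN
    [0.7, 0.8, 0.5, 0.8],   # 4G
    [0.9, 0.9, 0.3, 0.9],   # 5G
])
weights = np.array([0.4, 0.3, 0.1, 0.2])  # FAHP-style priorities, sum to 1

scores = attrs @ weights                   # weighted sum per network
best = networks[int(scores.argmax())]
print(best)  # → 5G
```

In the paper's scheme, the BPNN then adjusts such weights iteratively from the difference between the target and the computed score before each handover decision.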
A comparison between the proposed algorithm and the current algorithm demonstrates the new method's flexibility and the network's optimal connectivity, with a high rate of successful handovers.</p> </abstract>

ARTICLE: Scene Flow Estimation with the GVF Snake Model for Obstacle Detection Using a 3D Sensor in the Path-Planning Module<abstract> <title style='display:none'>Abstract</title> <p>This article demonstrates a novel framework for estimating dense scene flow from depth-camera data. Using the estimated flow vectors to identify obstacles improves the path-planning module of an autonomous vehicle's (AV) intelligence. Path planning in cluttered environments has been regarded as the primary difficulty in the development of AVs: these vehicles must be able to recognize their surroundings and navigate around obstacles. The AV needs a thorough understanding of its surroundings to detect and avoid obstacles in a cluttered environment, so when determining the course, it is preferable to know the kinematic behavior (position and direction) of the obstacles. Accordingly, by comparing depth images across time frames, the position and direction of the obstacles are calculated using a 3D vision sensor. The current study focuses on extracting flow vectors in 3D coordinates with a differential scene flow method. In general, the evaluation of scene flow algorithms is crucial in determining their accuracy and effectiveness in different applications. The gradient vector flow (GVF) snake model, which extracts changes in pixel values in three directions, is combined with the scene flow technique to identify both static and dynamic obstacles. Our goal is to create a real-time obstacle-avoidance method based on scene flow estimation from a single vision sensor.
In addition, common evaluation metrics such as the endpoint error (EPE), average angular error (AAE), and standard deviation of angular error (STDAE) are used to measure the computational errors of different algorithms on the benchmark Middlebury datasets. The proposed technique is validated in different experiments using a photonic mixer device (PMD) camera and a Kinect sensor as 3D sensors. Finally, the numerical and experimental results are reported.</p> </abstract>

ARTICLE: …of Sensor Network based on Non-linear Data Analysis for Leaching of Ionic Rare Earth Ore Bodies under Similar Simulation Experiments<abstract> <title style='display:none'>Abstract</title> <p>Ionic rare earth (RE) mines operate using the <italic>in situ</italic> leaching (ISL) method, but investigation shows that the geological damage caused by RE mines has remained serious in recent years. The damaged structure affects the stability of mine slopes and can even cause geological disasters such as landslides and collapses, with a huge impact on the lives of local people. Therefore, analyzing the slope stability of RE mines is very important. In this study, indoor similarity simulation experiments were conducted using a sensor network (SN) with non-linear data analysis, and the stability of the ISL of ionic RE ore bodies was studied. In addition, an indoor column leaching simulation experiment was carried out, observing the internal fine structure and microstructure of the ore sample throughout the leaching process using nuclear magnetic resonance (NMR), under magnesium sulfate solution and water leaching conditions, respectively. The evolution mechanism of the pore structure during leaching was analyzed.
The experiments showed that the ore body reaches saturation in the early stages of leaching, with the porosity of the ore sample increasing sharply within the first hour due to seepage. Subsequently, the porosity of each sample increases gradually, indicating that the seepage of the leaching solution inside the sample has reached a relatively stable state.</p> </abstract>

ARTICLE: Analysis of Vegetation Dynamics Using Satellite Data and Machine Learning Algorithms: A Multi-Factorial Approach<abstract> <title style='display:none'>Abstract</title> <p>Accurate vegetation analysis is crucial amid accelerating global changes and human activities, yet achieving precise characterization with multi-temporal Sentinel-2 data is challenging. In this article, we present a comprehensive analysis of the seasonal vegetation cover of Greater Sydney in 2021, using Google Earth Engine (GEE) to process Sentinel-2 data. Using the random forest (RF) method, we performed image classification of vegetation patterns. Supplementary factors such as topographic elements, texture information, and vegetation indices enhanced the process and compensated for the limited input variables. Our model outperformed existing methods, offering superior insights into season-based vegetation dynamics. Multi-temporal Sentinel-2 data, topographic elements, vegetation indices, and textural factors proved critical for accurate analysis. Leveraging GEE and rich Sentinel-2 data, our study should benefit decision-makers involved in vegetation monitoring.</p> </abstract>

ARTICLE: …in Unmanned Aerial Vehicles: a Review<abstract> <title style='display:none'>Abstract</title> <sec><title style='display:none'>Context</title> <p>With the rapid advancement of unmanned aerial vehicle (UAV) technology, ensuring the security and integrity of these autonomous systems is paramount.
UAVs are susceptible to cyberattacks, including unauthorized access to, control of, or manipulation of their systems, leading to potential safety risks or unauthorized data retrieval. Moreover, UAVs face limited computing resources, wireless-communication and physical vulnerabilities, evolving threats and attack techniques, the necessity of regulatory compliance, and human factors.</p> </sec> <sec><title style='display:none'>Methods</title> <p>This review explores the potential cyberthreats faced by UAVs, including hacking, spoofing, and data breaches, and highlights the critical need for robust security measures. It examines various strategies and techniques used to protect UAVs from cyberattacks, e.g., encryption, authentication, and intrusion detection systems based on cyberthreat analysis and assessment algorithms. The approach to assessing UAV cybersecurity hazards combined STRIDE (a model for identifying computer-security-related threats) with the threats considered.</p> </sec> <sec><title style='display:none'>Findings</title> <p>Emphasis was placed on how strongly the evaluation depends on the accuracy of the UAV mission definition, the potential intruders, and social and other human-related circumstances. The review found that most studies focus on portraits of possible intruders, which can be crucial when conducting a cybersecurity assessment. Based on the review, future research directions for mitigating cybersecurity risks are presented.</p> </sec> <sec><title style='display:none'>Significance</title> <p>Protecting UAVs from cyberthreats ensures safe operations, preserves data integrity, and maintains public trust in autonomous systems.</p> </sec> </abstract>

ARTICLE: …Evaluation of ST-Based Methods for Simulating and Analyzing Power Quality Disturbances<abstract> <title style='display:none'>Abstract</title> <p>The complexity of power quality disturbances (PQDs) is a significant risk factor in the electricity sector.
An accurate and fast analysis of these disturbances provides crucial information for addressing power-quality issues. The main objective of this study is to explore a new analytic technique, covering all kinds of disturbances that can appear in electrical networks, that differs from earlier technologies such as the Fourier transform. Three methods based on the Stockwell transform, namely the discrete orthonormal Stockwell transform (DOST), the discrete cosine Stockwell transform (DCST), and the discrete cosine transform (DCT), were used to analyze PQDs in the time–frequency representation. These methods characterize the signal properties of a disturbance, which depend on resolution and absolute phase information. Nine PQDs, including normal sine waves, were mathematically modeled and used to evaluate the proposed methods. All the methods can effectively simulate and analyze PQDs; among them, DOST is the most effective at providing clear, high-resolution time–frequency representations of signals. The classification of disturbances was performed using statistical features (mean, variance, standard deviation, entropy, skewness, and kurtosis) extracted from the matrices produced by the Stockwell transform-based methods. A neural network classifier was used for pattern recognition, and the patterns of the different methods were compared. Simulation results proved that DOST needs fewer samples than the other methods, and its ability to handle signals in the time–frequency resolution is also more viable. The neural network classifier achieves a higher accuracy rate than the K-nearest neighbor and decision tree methods and approaches that of the support vector machine.</p> </abstract>

ARTICLE: …Characterization Using MIMO Radar<abstract> <title style='display:none'>Abstract</title> <p>The exploitation of coherency gain and diversity gain to improve MIMO system performance is an active research topic.
This paper examines the performance of MIMO radar by exploiting these gains. The authors analyze the performance of the MIMO radar through mathematical modeling, considering the probability of detection and the post-processing SNR with respect to changes in the diversity order. Furthermore, the paper also covers the practical implementation and analysis of the system, demonstrating the range imaging and RCS pattern of a known standard target.</p> </abstract>

ARTICLE: …optimization in neural-based translation systems: A case study<abstract> <title style='display:none'>Abstract</title> <p>Machine translation (MT) is an important use case in natural language processing (NLP) that converts a source language to a target language automatically. Modern intelligent systems, or artificial intelligence (AI), use machine learning approaches in which the machine acquires its ability from datasets. Nowadays, in the MT domain, neural machine translation (NMT) systems have almost replaced statistical machine translation (SMT) systems. NMT systems use a deep learning framework in their implementation. To achieve higher accuracy during the training of an NMT model, extensive hyper-parameter tuning is required. The paper highlights the significance of hyper-parameter tuning in various machine learning algorithms. As a case study, in-house experimentation was conducted on a low-resource English–Bangla language pair by designing an NMT system, and the significance of various hyper-parameter optimizations was analyzed while evaluating performance with the automatic metric BLEU. The BLEU scores obtained for the first, second, and third randomly picked test sentences are 4.1, 3.2, and 3.01, respectively.</p> </abstract>
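As an aside on the BLEU metric used in the last abstract, here is a minimal unsmoothed sentence-BLEU sketch; real evaluations use corpus-level, smoothed implementations from standard toolkits, and the example sentences are our own, not from the case study:

```python
import math
from collections import Counter

def sentence_bleu(reference, hypothesis, max_n=4):
    """Sentence-level BLEU with uniform n-gram weights and a brevity
    penalty (a simplified, unsmoothed sketch)."""
    ref, hyp = reference.split(), hypothesis.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        ref_ngrams = Counter(tuple(ref[i:i+n]) for i in range(len(ref)-n+1))
        hyp_ngrams = Counter(tuple(hyp[i:i+n]) for i in range(len(hyp)-n+1))
        # clipped n-gram matches: each hyp n-gram counts at most as often
        # as it appears in the reference
        overlap = sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
        total = max(sum(hyp_ngrams.values()), 1)
        if overlap == 0:
            return 0.0            # unsmoothed: any zero precision gives 0
        log_prec += math.log(overlap / total) / max_n
    # brevity penalty punishes hypotheses shorter than the reference
    bp = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / max(len(hyp), 1))
    return bp * math.exp(log_prec)

print(sentence_bleu("the cat sat on the mat", "the cat sat on the mat"))  # → 1.0
```

Low single-digit BLEU scores, like those reported for the English–Bangla pair, are common for low-resource language pairs where n-gram overlap with the reference is sparse.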