Cybernetics and Information Technologies Feed
Sciendo RSS Feed for Cybernetics and Information Technologies
https://sciendo.com/journal/CAIT | https://www.sciendo.com

Software Requirement Smells and Detection Techniques: A Systematic Literature Review
https://sciendo.com/article/10.2478/cait-2024-0037 (published 2024-12-18)
Abstract: One of the major reasons for software project failure is poor requirements, so numerous requirement smell detection solutions have been proposed. Critical appraisal of the proposed requirement fault detection methods is crucial for refining knowledge of requirement smells and developing new research ideas. The objective of this paper was to systematically review studies that focus on detecting requirement discrepancies in textual requirements. After applying inclusion and exclusion criteria and forward and backward snowball sampling with database-specific search queries, 19 primary studies were selected. A deep analysis of the studies shows that classical NLP-based requirement smell detection techniques are the most commonly used, and that ambiguity is the requirement smell receiving the most attention. Further investigation reveals a scarcity of open-access datasets and tools for detecting requirement faults. The review also found that there is no comprehensive definition or classification of requirement smells.

New Image Crypto-Compression Scheme Based on ECC and Chaos Theory for High-Speed and Reliable Transmission of Medical Images in the IoMT
https://sciendo.com/article/10.2478/cait-2024-0038 (published 2024-12-18)
Abstract: The rapid advancement of IoT has significantly transformed the healthcare sector, leading to the emergence of the Internet of Medical Things (IoMT). Ensuring the security and privacy of medical data is crucial when integrating smart and intelligent sensor devices within the hospital environment. In this context, we propose a lightweight crypto-compression scheme based on Elliptic Curve Cryptography (ECC) and chaos theory to secure medical images in IoMT applications. The primary innovation of this method is the generation of dynamic S-boxes and keys using the ECC mechanism and a PieceWise Linear Chaotic Map (PWLCM). Wavelet transform technology is employed for compression, and the compressed images are secured within an IoT framework. The proposed methodology was evaluated experimentally on various medical images. The findings and security analysis show that the proposed method is robust and well suited for secure medical image transmission in the IoT ecosystem.
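As a concrete illustration of the PWLCM keystream idea named in the crypto-compression abstract above, the sketch below iterates the standard PieceWise Linear Chaotic Map and XOR-masks a toy image with the resulting bytes. The seed and control parameter are arbitrary stand-ins, and the ECC key exchange, dynamic S-box generation, and wavelet compression of the actual scheme are omitted; this is a minimal sketch, not the authors' implementation.

```python
import numpy as np

def pwlcm(x, p):
    """One iteration of the PieceWise Linear Chaotic Map (PWLCM).

    x in [0, 1], control parameter p in (0, 0.5).
    """
    if x >= 0.5:                 # the map is symmetric about x = 0.5
        x = 1.0 - x
    if x < p:
        return x / p
    return (x - p) / (0.5 - p)

def keystream(seed, p, n, burn_in=1000):
    """Generate n keystream bytes by iterating the PWLCM.

    The burn-in discards transients so the stream depends sensitively
    on the seed (a common convention in chaos-based ciphers).
    """
    x = seed
    for _ in range(burn_in):
        x = pwlcm(x, p)
    out = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = pwlcm(x, p)
        out[i] = int(x * 256) % 256
    return out

# XOR-masking a stand-in image with the chaotic keystream
image = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)
ks = keystream(seed=0.3141592, p=0.271828, n=image.size)  # illustrative values
cipher = image ^ ks.reshape(image.shape)
assert np.array_equal(cipher ^ ks.reshape(image.shape), image)  # XOR decryption inverts
```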
Transmission Map Refinement Using Laplacian Transform on Single Image Dehazing Based on Dark Channel Prior Approach
https://sciendo.com/article/10.2478/cait-2024-0039 (published 2024-12-18)
Abstract: Computer vision requires high-quality input images to facilitate interpretation and analysis tasks. However, the image acquisition process does not always produce good-quality images; in outdoor environments, image quality is determined by weather and environmental conditions. Bad weather caused by pollution particles in the atmosphere, such as smoke, fog, and haze, can degrade image contrast, brightness, and sharpness. This research proposes to recover a better haze-free image from a hazy image by applying Laplacian filtering and image enhancement techniques to the transmission map reconstruction in the dark channel prior approach. Experimental results show that the proposed method improves the visual quality of the dehazed images by 45% to 56% compared to the ground-truth images, and that it is fairly competitive with similar methods in the same domain.
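For readers unfamiliar with the dark channel prior pipeline this paper builds on, here is a minimal NumPy/SciPy sketch of the classical steps: dark channel, atmospheric light, transmission map, and radiance recovery. The Laplacian-based refinement line is only a guess at where the paper's contribution would slot in; the patch size and weights are illustrative, not the authors' settings.

```python
import numpy as np
from scipy.ndimage import minimum_filter, laplace

def dark_channel(img, patch=15):
    """Per-pixel min over the RGB channels, then a local min filter."""
    return minimum_filter(img.min(axis=2), size=patch)

def atmospheric_light(img, dark, top=0.001):
    """Average the hazy image over the top 0.1% brightest dark-channel pixels."""
    n = max(1, int(dark.size * top))
    idx = np.argpartition(dark.ravel(), -n)[-n:]
    return img.reshape(-1, 3)[idx].mean(axis=0)

def dehaze(img, omega=0.95, t0=0.1, patch=15):
    """Classical dark-channel-prior dehazing on an RGB image in [0, 1].

    The Laplacian sharpening of t below is an assumption about where
    the paper's refinement fits, not its actual method.
    """
    A = atmospheric_light(img, dark_channel(img, patch))
    t = 1.0 - omega * dark_channel(img / A, patch)      # transmission estimate
    t = np.clip(t - 0.1 * laplace(t), t0, 1.0)          # illustrative refinement
    return np.clip((img - A) / t[..., None] + A, 0.0, 1.0)

hazy = np.random.rand(64, 64, 3)   # stand-in for a hazy photograph
clear = dehaze(hazy)
```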
Deep Learning-Based Travel Time Estimation in Hiking with Consideration of Individual Walking Ability
https://sciendo.com/article/10.2478/cait-2024-0033 (published 2024-12-18)
Abstract: Hiking is popular, but mountain accidents are a serious problem, and accurately predicting hiking travel time is an essential factor in preventing them. However, it is challenging to reflect individual hiking ability and the effects of fatigue in travel time estimation. This study therefore proposes a deep learning model, "HikingTTE", for estimating arrival times when hiking. HikingTTE estimates hiking travel time by considering complex factors such as individual hiking ability, changes in walking pace, terrain, and elevation. The proposed model achieved significantly higher accuracy than existing hiking travel time estimation methods based on the relation between slope and speed. Furthermore, HikingTTE demonstrated higher accuracy in predicting hiking arrival times than a deep learning model originally developed to estimate taxi arrival times. The source code of HikingTTE is available on GitHub for future development of the travel time estimation task.
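The slope-speed baselines that HikingTTE is compared against are typified by Tobler's hiking function; the abstract does not name it, so treat this as an assumed example of such a baseline rather than the paper's exact comparator. A sketch of segment-wise travel time estimation with it:

```python
import math

def tobler_speed(slope):
    """Walking speed in km/h as a function of terrain slope (rise/run),
    per Tobler's hiking function: 6 * exp(-3.5 * |slope + 0.05|)."""
    return 6.0 * math.exp(-3.5 * abs(slope + 0.05))

def travel_time_hours(segments):
    """Sum per-segment times for a route given as
    (horizontal_distance_km, elevation_gain_km) pairs."""
    total = 0.0
    for dist_km, gain_km in segments:
        slope = gain_km / dist_km if dist_km else 0.0
        total += dist_km / tobler_speed(slope)
    return total

# a toy 3-segment route: flat, moderate climb, descent
route = [(1.0, 0.0), (0.8, 0.12), (1.2, -0.15)]
print(f"estimated time: {travel_time_hours(route):.2f} h")
```

Baselines of this kind apply the same speed curve to every hiker, which is exactly the individual-ability and fatigue gap the deep model targets.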
An Interface for Linking Ancient Languages
https://sciendo.com/article/10.2478/cait-2024-0042 (published 2024-12-18)
Abstract: This paper focuses on the linking capabilities offered by EpiLexO, a web-based front-end for creating and editing an ecosystem of digital resources for ancient languages, developed in the context of a project on the languages of fragmentary attestation of ancient Italy. The focus is particularly on the mechanisms introduced for linking lexical information to other information, either internally or externally: for example, for creating attestations by linking lexical forms to their variants in relevant inscriptions, and for linking lexical data to external independent Linked Open Data (LOD) datasets available on remote endpoints. Finally, in the conclusions, we briefly introduce planned and desired enhancements as well as the final platform component: an end-user interface, open to anyone on the web, that will allow browsing, searching, cross-querying, and visualizing the created set of interlinked resources.

A Cost-Benefit Model for Feasible IoT Edge Resources Scalability to Improve Real-Time Processing Performance
https://sciendo.com/article/10.2478/cait-2024-0036 (published 2024-12-18)
Abstract: Edge computing systems have emerged to facilitate real-time processing of delay-sensitive tasks in Internet of Things (IoT) systems. As the volume of generated data and real-time tasks increases, more pressure is placed on edge servers, which eventually reduces their ability to meet the processing deadlines of delay-sensitive tasks, degrading user satisfaction and revenues. At some point, scaling up the edge servers' processing resources may be needed to maintain user satisfaction. However, enterprises need to know whether the cost of that scalability is feasible: whether it will generate the required return on investment and reduce the forgone revenues. This paper introduces a cost-benefit model that weighs the cost of scaling edge processing resources against the benefit of maintaining user satisfaction. We simulated the model under different scenarios to show its ability to decide whether scaling up will be feasible.
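The paper's cost-benefit model is not specified in the abstract, so the sketch below is a deliberately toy stand-in: it compares the cost of added edge capacity against the revenue recovered from tasks that would otherwise miss their deadlines. All parameter names and the linear revenue assumption are hypothetical.

```python
def scaling_is_feasible(scaling_cost, extra_capacity_tps, revenue_per_task,
                        missed_tasks_per_day, horizon_days):
    """Toy feasibility check: scale up only if the revenue recovered from
    tasks that would otherwise miss their deadlines exceeds the cost of
    the added edge resources over the planning horizon."""
    recovered_per_day = min(extra_capacity_tps * 86400, missed_tasks_per_day)
    benefit = recovered_per_day * revenue_per_task * horizon_days
    return benefit > scaling_cost, benefit - scaling_cost

feasible, net = scaling_is_feasible(
    scaling_cost=50_000.0,      # one-off cost of extra edge servers
    extra_capacity_tps=0.5,     # extra tasks/second the upgrade can absorb
    revenue_per_task=0.002,     # revenue tied to each on-time task
    missed_tasks_per_day=40_000,
    horizon_days=365,
)
print(f"feasible={feasible}, net benefit={net:,.0f}")
```

With these made-up numbers the check comes out negative, illustrating the paper's point that scaling up is not automatically worthwhile.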
Text+ – Concept and Benefits for Empirical Researchers
https://sciendo.com/article/10.2478/cait-2024-0040 (published 2024-12-18)
Abstract: In this contribution, we report on ongoing efforts in the German national research infrastructure consortium Text+ to make research data and services for text- and language-oriented disciplines FAIR, that is, findable, accessible, interoperable, and reusable, as well as compliant with the CARE principles for language resources.

A Framework for Analysing Disinformation Narratives: Ukrainian Refugees in Bulgaria
https://sciendo.com/article/10.2478/cait-2024-0043 (published 2024-12-18)
Abstract: This article presents a methodological framework for analyzing disinformation narratives, emphasizing the significance of localized contextualization, particularly the influence of the cultural and historical factors embedded within these narratives. Understanding these elements is crucial for unpacking the dynamics and power relations present in disinformation discourses. The study focuses on misleading information regarding Ukrainian refugees in Bulgaria, a country vulnerable to disinformation yet often overlooked in research, partly due to its linguistic context. The paper also advocates applying Gramscian theories of hegemony and the "war of position" as contextual lenses to strengthen the theoretical and methodological framework. The framework employs a discourse analysis approach supplemented by Natural Language Processing (NLP), enabling the capture of critical aspects of disinformation dynamics and yielding multi-layered, informative, and actionable insights.

The Browser-Based GLAUx Treebank Infrastructure: Framework, Functionality, and Future
https://sciendo.com/article/10.2478/cait-2024-0041 (published 2024-12-18)
Abstract: This paper presents the browser-based treebank infrastructure of GLAUx (the Greek Language AUtomated), a linguistic annotation project that now has an integrated and user-friendly platform for exploring its data. After discussing the size and types of texts included in the GLAUx corpus, the contribution succinctly surveys the types of linguistic annotation covered by the project (morphology, lemmatization, and syntax). The emphasis is on a description of the underlying SQL database structure and the search architecture. Infrastructure-related challenges faced by the GLAUx project are also discussed. Finally, the paper concludes with a discussion of future steps for the project, including additional functionality and expansion of the corpus.

Billion-Scale Similarity Search Using a Hybrid Indexing Approach with Advanced Filtering
https://sciendo.com/article/10.2478/cait-2024-0035 (published 2024-12-18)
Abstract: This paper presents a novel approach to similarity search with complex filtering capabilities on billion-scale datasets, optimized for CPU inference. Our method extends the classical IVF-Flat index structure to integrate multi-dimensional filters. The proposed algorithm combines dense embeddings with discrete filtering attributes, enabling fast retrieval in high-dimensional spaces. Designed specifically for CPU-based systems, our disk-based approach offers a cost-effective solution for large-scale similarity search. We demonstrate the effectiveness of our method through a case study showcasing its potential for various practical uses.
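To make the probe-then-filter idea behind a filtered IVF-Flat index concrete, here is a small in-memory NumPy sketch with a single discrete attribute. The paper's index is disk-based, billion-scale, and supports multi-dimensional filters; none of that is reproduced here, and random centroids stand in for a trained k-means coarse quantizer.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n, nlist = 32, 10_000, 64

vectors = rng.standard_normal((n, dim)).astype(np.float32)
attrs = rng.integers(0, 5, size=n)        # one discrete filter attribute per vector

# Coarse quantizer: nlist random centroids (k-means in a real index)
centroids = vectors[rng.choice(n, nlist, replace=False)]
assign = np.argmin(((vectors[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
inverted_lists = {c: np.where(assign == c)[0] for c in range(nlist)}

def search(query, category, k=5, nprobe=8):
    """IVF-Flat search with an attribute filter: probe the nprobe nearest
    inverted lists, drop candidates whose attribute fails the filter,
    then rank the survivors by exact distance."""
    d2c = ((centroids - query) ** 2).sum(-1)
    cand = np.concatenate([inverted_lists[c] for c in np.argsort(d2c)[:nprobe]])
    cand = cand[attrs[cand] == category]   # filter before the exact-distance pass
    dists = ((vectors[cand] - query) ** 2).sum(-1)
    return cand[np.argsort(dists)[:k]]

print(search(rng.standard_normal(dim).astype(np.float32), category=2))
```

Filtering inside the probe step, rather than post-filtering a finished result list, is what keeps recall stable when the filter is selective.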
Latest Advancements in Credit Risk Assessment with Machine Learning and Deep Learning Techniques
https://sciendo.com/article/10.2478/cait-2024-0034 (published 2024-12-18)
Abstract: Loans are vital for individuals and organizations to meet their goals. However, financial institutions face challenges such as managing losses and missed opportunities in loan decisions. A key issue is the imbalanced datasets used in credit risk assessment, which hinder accurate prediction of defaulters. Previous research has applied machine learning techniques, including single and multiple classifier systems, ensemble methods, and class-balancing approaches. This review summarizes the factors and machine learning methods used for assessing credit risk, presented in tabular format to provide valuable insights for researchers; it covers data complexity, minority class distribution, sampling techniques, feature selection, and meta-learning parameters. The goal is to help develop novel algorithms that outperform existing methods: even a slight improvement in defaulter prediction rates could save lenders millions and significantly benefit society.

A Systematic Review of the Rapidly exploring Random Tree (RRT) Algorithm for Single and Multiple Robots
https://sciendo.com/article/10.2478/cait-2024-0026 (published 2024-09-19)
Abstract: Recent advances in path-planning algorithms have transformed robotics, and the Rapidly exploring Random Tree (RRT) algorithm underpins autonomous robot navigation. This paper systematically examines the uses and development of RRT algorithms for single and multiple robots to demonstrate their importance in modern robotics research. To this end, we reviewed 70 works on RRT algorithms in single- and multi-robot path planning from 2015 to 2023. The evolution of the RRT algorithm, including crucial turning points and innovative techniques, is examined, and a detailed comparison of RRT variants reveals their merits, limitations, and development potential. The review's identification of emerging areas and future research initiatives will enable roboticists to apply RRT algorithms effectively. This thorough review is essential to the robotics community: it inspires new ideas, supports problem-solving, and expedites the development of single- and multi-robot systems, highlighting the necessity of RRT algorithms for the advancement of autonomous and collaborative robotics.
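For reference, the basic RRT loop that the surveyed variants extend fits in a few lines: sample a point (with a small goal bias), extend the nearest tree node one fixed step toward it, and keep the new node if it is collision-free. A minimal 2D sketch, with an assumed disc obstacle:

```python
import math, random

def rrt(start, goal, is_free, bounds, step=0.5, goal_tol=0.5, max_iters=5000):
    """Basic RRT in the plane: sample, extend nearest node one step toward
    the sample, stop when a node lands within goal_tol of the goal."""
    nodes, parent = [start], {0: None}
    for _ in range(max_iters):
        sample = goal if random.random() < 0.05 else (
            random.uniform(*bounds[0]), random.uniform(*bounds[1]))
        i = min(range(len(nodes)), key=lambda j: math.dist(nodes[j], sample))
        nx, ny = nodes[i]
        d = math.dist((nx, ny), sample)
        new = (nx + step * (sample[0] - nx) / d,
               ny + step * (sample[1] - ny) / d) if d > step else sample
        if not is_free(new):
            continue                      # discard nodes inside obstacles
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:
            path, k = [], len(nodes) - 1  # walk parents back to the start
            while k is not None:
                path.append(nodes[k]); k = parent[k]
            return path[::-1]
    return None

# free space is everything outside a disc obstacle centered at (5, 5)
free = lambda p: math.dist(p, (5.0, 5.0)) > 2.0
print(rrt((1.0, 1.0), (9.0, 9.0), free, bounds=((0, 10), (0, 10))))
```

Variants such as RRT* differ mainly in how new nodes are wired and rewired into the tree, not in this core sampling loop.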
ANFIS-AMAL: Android Malware Threat Assessment Using Ensemble of ANFIS and GWO
https://sciendo.com/article/10.2478/cait-2024-0024 (published 2024-09-19)
Abstract: Android malware exhibits a wide variety of features and capabilities, and different malware families have distinctive characteristics; ransomware, for instance, threatens financial loss and system lockdown. This paper proposes a threat-assessment approach that uses the Grey Wolf Optimizer (GWO) to train and tune an Adaptive Neuro-Fuzzy Inference System (ANFIS) to categorize Android malware accurately. GWO improves the efficiency and efficacy of ANFIS training and learning for Android malware feature selection and classification. Our approach categorizes Android malware as a high, moderate, or low hazard and qualitatively assesses risk based on critical features and threats. The threat-assessment scale categorizes Android malware and resolves the issue of overlapping features across different types of malware. Comparative results with other classifiers show that the GWO ensemble is effective in the ANFIS training and learning process, achieving a 95% F-score, 94% specificity, and 94% accuracy. The ensemble enables fast learning and improves classification accuracy.

Advanced PSO Algorithms Development with Combined lbest and gbest Neighborhood Topologies
https://sciendo.com/article/10.2478/cait-2024-0025 (published 2024-09-19)
Abstract: This paper introduces an innovative approach integrating the global best (gbest) and local best (lbest) PSO communication topologies. The algorithm starts with the lbest topology and seamlessly transitions to gbest, with the switching rate controlled by a parameter "a" whose rational values are determined through numerical experiments. A comparative methodology employing two estimation criteria is used to showcase the improved performance of the modified PSO-based algorithms, and the efficacy of the approach is demonstrated on two optimal control problems in dynamical systems. The results highlight the modified algorithms' superiority in terms of the total number of successful runs and statistical indicators. Consequently, these advanced algorithms prove effective for applications such as artificial neural network training, controller gain determination, and similar problem domains.
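The abstract does not spell out how the parameter "a" schedules the lbest-to-gbest transition, so the sketch below assumes one plausible rule: use a ring (lbest) topology for the first a·T iterations, then switch to gbest. Everything else is textbook PSO with standard inertia and acceleration coefficients, not the authors' exact algorithm.

```python
import numpy as np

def sphere(x):                      # toy objective to minimize
    return (x ** 2).sum(axis=-1)

def pso(f, dim=10, n=30, iters=500, a=0.4, w=0.72, c1=1.49, c2=1.49, seed=0):
    """PSO that starts with an lbest ring topology and switches to gbest
    after a fraction `a` of the iterations (an assumed switching rule)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), f(x)
    for t in range(iters):
        if t < a * iters:           # lbest: best of each particle's ring neighbors
            ring = np.stack([np.roll(pval, 1), pval, np.roll(pval, -1)])
            idx = (np.arange(n) + np.argmin(ring, axis=0) - 1) % n
            social = pbest[idx]
        else:                       # gbest: the swarm-wide best attracts everyone
            social = pbest[np.argmin(pval)]
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (social - x)
        x = x + v
        fx = f(x)
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
    return pval.min()

print(f"best value found: {pso(sphere):.3e}")
```

The intuition matches the paper's design: the ring topology explores broadly early on, and the global topology accelerates convergence once promising regions are found.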
Managing Cybersecurity: Digital Footprint Threats
https://sciendo.com/article/10.2478/cait-2024-0030 (published 2024-09-19)
Abstract: Managing cybersecurity and protecting data assets remain top priorities for businesses. Despite this, numerous data breaches persist due to malicious human actions, resulting in significant financial setbacks. Many cybersecurity strategies, however, overlook invisible or indirect threats such as digital footprints. This paper examines the relationship between personality traits and user behavior concerning cybersecurity. The study suggests that human personality can be predicted using innovative techniques based on the digital traces individuals leave on the internet, and that this information can consequently be exploited for malicious actions against organizations. As proposed, an effective strategy for improving behaviors and cultivating a security-oriented culture involves continually identifying relevant sources of cyber risk and implementing continuous awareness initiatives.

Energy-Efficient and Accelerated Resource Allocation in O-RAN Slicing Using Deep Reinforcement Learning and Transfer Learning
https://sciendo.com/article/10.2478/cait-2024-0029 (published 2024-09-19)
Abstract: Next Generation Wireless Networks (NGWNs) have two main components: Network Slicing (NS) and Open Radio Access Networks (O-RAN). NS is needed to handle various Quality of Service (QoS) requirements, while O-RAN provides an open environment for network vendors and Mobile Network Operators (MNOs). In recent years, Deep Reinforcement Learning (DRL) approaches have been proposed to solve key issues in NGWNs, but the primary obstacles to DRL deployment are slow convergence and instability. Additionally, these algorithms have large carbon footprints that negatively impact climate change. This paper tackles the dynamic allocation of O-RAN radio resources for better QoS, faster convergence, stability, lower energy and power consumption, and reduced carbon emissions. First, we develop an agent with a newly designed latency-based reward function and a top-k filtration mechanism for actions. Then, we propose a policy Transfer Learning approach to accelerate agent convergence. We compare our model against two baseline models.

Exploring the Efficacy of GenAI in Grading SQL Query Tasks: A Case Study
https://sciendo.com/article/10.2478/cait-2024-0027 (published 2024-09-19)
Abstract: Numerous techniques, including problem-solving, seeking clarification, and creating questions, have been employed to apply generative Artificial Intelligence (AI) in education. This study investigates the possibility of using Generative AI (GenAI) to grade Structured Query Language (SQL) queries automatically. Three models were used: ChatGPT, Gemini, and Copilot. The study takes an experimental approach, assessing how well the models evaluate student responses by comparing their accuracy with that of human experts. The results show that, despite some inconsistencies, GenAI holds great promise for streamlining grading; given its inconsistent performance, however, further research is required. If these issues are resolved, GenAI can be utilized in education, but human oversight and ethical considerations must always come first.

Deep Learning-Driven Workload Prediction and Optimization for Load Balancing in Cloud Computing Environment
https://sciendo.com/article/10.2478/cait-2024-0023 (published 2024-09-19)
Abstract: Cloud computing has revolutionized technology by serving large-scale user demands, and workload prediction and scheduling are factors that dictate cloud performance. Forecasting the future workload to avoid unfair resource allocation emerges as a crucial factor for enhanced performance. Our work addresses these issues with a deep-learning-driven Max-out prediction model, which efficiently forecasts the future workload and provides a balanced approach to scheduling through the Tasmanian Devil-Bald Eagle Search (TDBES) optimization algorithm. The results show that TDBES achieved gains of 16.75% in makespan, 14.78% in migration cost, and a 9.36% migration efficiency rate over existing techniques such as DBOA, WACO, and MPSO; an additional error analysis of prediction performance using RMSE, MAP, and MAE shows that our approach yields the lowest error among the compared methods.

Real-Time Hand Gesture Recognition: A Comprehensive Review of Techniques, Applications, and Challenges
https://sciendo.com/article/10.2478/cait-2024-0031 (published 2024-09-19)
Abstract: Real-time Hand Gesture Recognition (HGR) has emerged as a vital technology in human-computer interaction, offering intuitive and natural ways for users to interact with computer-vision systems. This comprehensive review explores the advancements, challenges, and future directions of real-time HGR. HGR-related technologies are investigated, including the sensor and vision technologies used to acquire data in HGR systems, and different recognition approaches are discussed, from traditional handcrafted-feature methods to state-of-the-art deep learning techniques. Learning paradigms are analyzed in the context of HGR, including supervised, unsupervised, transfer, and adaptive learning, and a wide range of applications is covered, from sign language recognition to healthcare and security systems. Despite significant developments in the computer vision domain, challenges remain in environmental robustness, gesture complexity, computational efficiency, and user adaptability. The paper concludes by highlighting potential solutions and future research directions aimed at more robust, efficient, and user-friendly real-time HGR systems.

Multi-Level Machine Learning Model to Improve the Effectiveness of Predicting Customer Churn in Banks
https://sciendo.com/article/10.2478/cait-2024-0022 (published 2024-09-19)
Abstract: This study presents a novel multi-level Stacking model designed to enhance the accuracy of customer churn prediction in the banking sector, a critical aspect of improving customer retention. Our approach integrates four distinct machine-learning algorithms at the first level (Level 0): K-Nearest Neighbor (KNN), XGBoost, Random Forest (RF), and Support Vector Machine (SVM). These algorithms generate initial predictions, which are then combined and fed into higher-level models (Level 1) comprising Logistic Regression, a Recurrent Neural Network (RNN), and a Deep Neural Network (DNN). We evaluated the model in three scenarios: Scenario 1 uses Logistic Regression at Level 1, Scenario 2 employs a Deep Neural Network (DNN), and Scenario 3 utilizes a Deep Recurrent Neural Network (RNN). Our experiments on multiple datasets demonstrate significant improvements over traditional methods; in particular, Scenario 1 achieved an accuracy of 91.08%, a ROC-AUC of 98%, and an AUC-PR of 98.15%. Comparisons with existing research further underscore the enhanced performance of the proposed model.
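A Scenario-1-style version of the stacking architecture above is straightforward to reproduce with scikit-learn. The sketch below uses the four Level-0 learners named in the abstract, with GradientBoosting standing in for XGBoost to avoid a third-party dependency, and Logistic Regression at Level 1, trained on synthetic imbalanced data rather than the paper's bank datasets.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for a churn dataset: 2000 customers, 20 features, imbalanced classes
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.8],
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

# Level 0: the four base learners named in the abstract (GradientBoosting
# stands in for XGBoost to keep the sketch dependency-free)
level0 = [
    ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier())),
    ("gb", GradientBoostingClassifier(random_state=42)),
    ("rf", RandomForestClassifier(random_state=42)),
    ("svm", make_pipeline(StandardScaler(), SVC(probability=True, random_state=42))),
]

# Level 1 (Scenario 1): logistic regression stacked on the base predictions,
# with cross-validated out-of-fold probabilities as its input features
model = StackingClassifier(estimators=level0, final_estimator=LogisticRegression(),
                           stack_method="predict_proba", cv=5)
model.fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.3f}")
```

Scenarios 2 and 3 would swap the `final_estimator` for a DNN or RNN, which requires a deep learning framework and is omitted from this sketch.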