Anantharaman Janakiraman and Behnaz Ghoraan, Department of Electrical Engineering and Computer Science, Florida Atlantic University
Text summarization is crucial for mitigating information overload across domains. This research evaluates summarization performance across 17 large language models on seven diverse datasets at three output lengths (50, 100, and 150 tokens). We employ a novel multi-dimensional framework assessing factual consistency, semantic similarity, lexical overlap, and human-like evaluation while considering both quality and efficiency factors. Key findings reveal significant differences between models, with specific models excelling in factual accuracy (deepseek-v3), human-like quality (claude-3-5-sonnet), processing efficiency (gemini-1.5-flash), and cost effectiveness (gemini-1.5-flash). Performance varies dramatically by dataset, with models struggling on technical domains but performing well on conversational content. We identified a critical tension between factual consistency (best at 50 tokens) and perceived quality (best at longer output lengths).
Text Summarization, Large Language Models, Multi-dimensional Evaluation, Evaluation Metrics, Model Comparison
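As a minimal illustration of one dimension in such a multi-metric framework, the sketch below computes a unigram-overlap F1 score in the style of ROUGE-1, one common lexical-overlap metric; the function name and tokenization are illustrative assumptions, not the paper's implementation.

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Unigram-overlap F1 (ROUGE-1 style) between a candidate
    summary and a reference summary."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

A full framework of this kind would combine several such scores (lexical, semantic, factual) rather than rely on any single one.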
Mr. Mala Raja Sekhar, Mr. S. Sumit and Dr. B. Thanikaivel, Hindustan Institute of Technology and Sciences, India
Modern computing technologies have brought healthcare to patients' doorsteps by monitoring and diagnosing them remotely. In existing systems, the remote monitoring and diagnosis process is unreliable, the data collection process is complex, and an immediate response to a patient's request cannot be provided. In this paper, the proposed IoT hub cloud healthcare system enhances the remote health monitoring and diagnosis process by sensing real data with smart devices and storing it in cloud-based technologies such as Azure Cosmos DB storage for easy mapping and diagnosis of patients, while the monitoring process is carried out in Azure Databricks for immediate notification of changes in health conditions. The proposed system uses smart-device sensors such as the accelerometer, gyroscope, magnetometer, latitude-longitude, and battery status to collect patient data anytime, anywhere as a telemetry dataset. The accelerometer (x, y, z) detects body movements, posture, step count, falls, and activity intensity; it helps identify whether a person is walking, running, sitting, lying down, or has experienced a sudden fall. The gyroscope (x, y, z) tracks angular rotation (wrist movements, hand tremors) and is used for detecting neurological conditions (e.g., Parkinson's tremors) or monitoring exercise form during rehabilitation. The magnetometer (x, y, z) provides orientation and direction; combining it with the accelerometer and gyroscope gives accurate motion tracking. The collected telemetry dataset is analyzed using an Activity Based Key Value Mapping algorithm. The proposed IoT hub cloud healthcare system reduces round-trip time and improves data processing without data loss, whereas the existing system suffers from high latency, slow data processing, and data loss in the transmission channel.
Applications of the proposed system include Activity & Fitness Monitoring, Fall Detection & Emergency Alerts, Sleep Pattern Monitoring, Neurological Disorder Monitoring, Remote Patient Tracking, and Vital Correlation with Movement.
Cloud Computing, IoT Hub, Healthcare, Data Processing, Artificial Intelligence.
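To make the idea of activity-based key-value mapping concrete, the sketch below groups raw accelerometer samples under coarse activity keys. The thresholds and labels are hypothetical illustrations, not the paper's algorithm.

```python
import math

def classify_activity(ax: float, ay: float, az: float) -> str:
    """Map one accelerometer sample (in g) to a coarse activity key.
    Thresholds are illustrative, not taken from the paper."""
    mag = math.sqrt(ax * ax + ay * ay + az * az)
    if mag < 0.3:
        return "free-fall"   # near-zero g suggests a fall in progress
    if mag < 1.2:
        return "resting"     # close to 1 g: sitting or lying down
    if mag < 2.0:
        return "walking"
    return "running"

def activity_key_value_map(samples):
    """Group telemetry samples under activity keys, mimicking an
    activity-based key-value mapping over the telemetry stream."""
    mapping = {}
    for s in samples:
        mapping.setdefault(classify_activity(*s), []).append(s)
    return mapping
```

Keying telemetry by activity in this way is what lets a cloud store map and query patient states directly, instead of scanning raw sensor rows.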
Nurettin Selcuk Senol1, Amar Rasheed1, Ahmet Aydogan2, and Semih Cal2, 1San Jose State University and Sam Houston State University, 2University of North Carolina Wilmington and Otterbein University
As Low-Power Wide-Area Networks (LP-WANs) continue to establish a strong presence in Internet of Things (IoT) environments, providing secure communication becomes increasingly important. For instance, LoRa (Long Range) is a popular low-energy, long-range technology, but it has no built-in security features, which opens the system up to a variety of attacks including eavesdropping and packet injection. This study presents a comparative analysis of three symmetric encryption techniques (AES (Advanced Encryption Standard), ChaCha20, and XOR-based encryption) when deployed over a LoRa-based wireless network. The analysis was carried out in an experimental testbed constructed with Arduino MKR WAN 1310 devices and a passive LilyGO LoRa32 sniffer, consisting of descriptive experiments measuring packet loss, duplication, and timing behavior across 100 transmitted packets. Our results indicate that while AES provides robust security, more packet loss (15%) is observed along with latency variation from its computational demands. Comparatively, ChaCha20 and XOR did not experience any packet loss and displayed markedly stable timing, but resulted in more duplicated packets being identified (16% duplication). Our results highlight the trade-offs between security and performance and provide practical guidance for encryption choices in secure and reliable low-rate LoRa communications.
LoRa Security, Symmetric Encryption Algorithms, IoT Communication Reliability, Wireless Packet Analysis.
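Of the three schemes compared, only the XOR baseline is simple enough to sketch without a cryptographic library (AES and ChaCha20 would typically come from one, e.g. a crypto package on the microcontroller). A minimal sketch of repeating-key XOR encryption, with an illustrative key not taken from the study:

```python
import itertools

def xor_encrypt(payload: bytes, key: bytes) -> bytes:
    """XOR stream encryption with a repeating key: the lightweight
    baseline in the comparison. It is symmetric, so applying it
    twice with the same key recovers the plaintext."""
    return bytes(b ^ k for b, k in zip(payload, itertools.cycle(key)))

key = b"demo-key"            # illustrative key, not from the paper
packet = b"sensor:23.5C"
ciphertext = xor_encrypt(packet, key)
assert xor_encrypt(ciphertext, key) == packet  # round-trip check
```

The near-zero computational cost of this scheme is what buys the stable timing the study reports, at the price of far weaker security than AES or ChaCha20.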
David Kuhlen1 and Andreas Speck2, 1Waterloohain 9, 22769 Hamburg, Germany, 2Hermann-Rodewald-Straße 3, 24118 Kiel, Germany
The knapsack problem is a classical optimization problem in computer science that can be efficiently solved using an algorithm based on dynamic programming. This study investigates the impact of technical factors such as the number of elements, the size of the knapsack, and the sorting of elements on the performance of the algorithm. The investigation is based on a simulation experiment. The analysis of the simulation experiment data shows that as the total number of elements and the knapsack capacity increase, the algorithm's runtime also increases. In this context, an increase in knapsack capacity has a more pronounced negative effect on the algorithm's runtime. An examination of sorting strategies further reveals that a prior sorting of elements by value in descending order can lead to a significant deterioration in runtime.
Dynamic Programming, Knapsack Problem, Performance.
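The dynamic-programming algorithm under study can be sketched as follows; its table has one entry per capacity value for every element, which is why runtime grows with both factors the study varies. This is a standard textbook formulation, not the authors' exact implementation.

```python
def knapsack(values, weights, capacity):
    """0/1 knapsack via dynamic programming in O(n * capacity):
    dp[c] holds the best achievable value with capacity c."""
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # iterate capacities downwards so each item is used at most once
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]
```

Since the inner loop runs once per unit of capacity for every element, doubling the capacity doubles the work even when the element count stays fixed.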
Eta U. G.1, Adepoju A. O.1, Omonori A. A.1, and Eniowo O. D.2, 1Federal University of Technology, Nigeria, 2University of Johannesburg, South Africa
The cocoa processing industry in Nigeria is not very competitive because of operational inefficiencies, high costs, and sustainability problems, which calls for an investigation of what affects operational performance in the industry. This study sought to analyze the correlation between employees' material handling practices and operational performance in cocoa processing factories in Ondo State. The research employed a quantitative framework utilizing a cross-sectional survey methodology. The target population consisted of 393 employees from two prominent cocoa processing factories in the research area. Using Krejcie and Morgan's (1970) sampling table, a sample size of 196 was chosen. Partial Least Squares Structural Equation Modeling (PLS-SEM) was used to analyze the data. The results indicated that material handling practices have a large and statistically significant positive effect on operational performance (β = 0.877, T = 46.503, p < 0.001, R² = 0.769, f² = 3.328). The study demonstrates that material handling practices are an essential behavioral asset that influences efficiency, productivity, quality, and innovation in cocoa processing facilities. It therefore suggests that managers concentrate on minimizing handling errors via training on lean methodologies to enhance operational efficiency.
Material Handling Practices, Operational Performance, Cocoa Processing Factories, Partial Least Squares Structural Equation Model.
Saeed Bakhshan and Maedeh Yahaghi, Wayne State University, USA
A rainbow matching in an edge-colored graph is defined as a matching in which all edges have distinct colors. The maximum cardinality rainbow matching problem seeks to determine the largest rainbow matching in a graph. It is a fundamental problem in graph theory with numerous applications. In this paper, we present the first parallel approximation algorithm for rainbow matching in edge-colored bipartite graphs. This algorithm achieves a 1/3-approximation ratio, matching the classical guarantee for greedy maximal rainbow matchings (and, more generally, for 3-dimensional matching). Specifically, for any symmetric edge-colored bipartite graph with n vertices, m edges, and q colors, the proposed algorithm computes a maximal rainbow matching of size at least one third of the optimal matching in O(n) time on a PRAM with n² processors. Additionally, we present a deterministic sequential version of the algorithm, which computes a 1/3-approximation in O(m + n log n) time, mirroring the approximation ratio of the parallel algorithm. We provide a practical OpenMP implementation of the proposed parallel algorithm on a multi-core system and perform an extensive performance evaluation. The experimental results show that for large graphs with hundreds of millions of edges, the parallel algorithm achieves a considerable speedup of up to 5.908× relative to its sequential counterpart when using 32 cores. Rainbow matchings naturally arise in data-intensive and machine learning workflows where we must select large sets of pairwise non-conflicting interactions (e.g., user–item, data–worker, or job–machine assignments) under diversity or fairness constraints represented as edge colors. The proposed sequential and parallel algorithms can therefore serve as scalable combinatorial primitives for tasks such as diverse mini-batch or coreset construction, fair and diverse recommendation, and resource allocation in distributed training systems.
Rainbow matching, approximation algorithms, parallel algorithms, machine learning systems, graph-based learning.
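The classical greedy baseline whose 1/3 guarantee the parallel algorithm matches can be sketched in a few lines. This is the textbook sequential greedy, not the paper's PRAM or OpenMP algorithm.

```python
def greedy_rainbow_matching(edges):
    """Greedy maximal rainbow matching on an edge-colored graph.
    edges: iterable of (u, v, color) triples. Each accepted edge can
    block at most three edges of an optimal solution (one per endpoint
    and one for the color), which gives the 1/3-approximation bound."""
    used_vertices, used_colors, matching = set(), set(), []
    for u, v, c in edges:
        if u not in used_vertices and v not in used_vertices and c not in used_colors:
            matching.append((u, v, c))
            used_vertices.update((u, v))
            used_colors.add(c)
    return matching
```

Treating color as a third "dimension" alongside the two endpoints is also why the guarantee coincides with that for greedy 3-dimensional matching.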
Maedeh Yahaghi and Saeed Bakhshan, Wayne State University, U.S.
In this paper we present a parallel algorithm for coloring 3-colorable graphs, based on Wigderson's classical algorithm, which guarantees a coloring using at most O(√n) colors. To parallelize the approach, we integrate the Luby and Gebremedhin-Manne (GM) algorithms for parallel coloring. Experimental evaluations demonstrate that our parallel algorithm achieves speedup with up to 16 threads.
Graph Coloring, Parallel Algorithm, Machine Learning
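For context, the low-degree phase of Wigderson's scheme is plain greedy coloring, sketched below; the full algorithm additionally 2-colors the (bipartite) neighborhoods of high-degree vertices in a 3-colorable graph. This sketch is the standard sequential greedy, under the assumption that the graph is given as an adjacency dict, not the authors' parallel version.

```python
def greedy_color(adj):
    """Greedy sequential coloring: each vertex takes the smallest color
    not used by an already-colored neighbor. On a graph of maximum
    degree d this uses at most d + 1 colors, which is what bounds the
    low-degree phase of Wigderson's O(sqrt(n))-coloring scheme."""
    color = {}
    for v in adj:
        taken = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in taken:
            c += 1
        color[v] = c
    return color
```

Luby- and GM-style algorithms parallelize exactly this step by coloring independent sets of vertices concurrently and resolving conflicts.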
Jose Joaquin Vargas Altalaguerri, Institut El Palau, Spain
Modern multimodal perception systems excel at extracting semantic information but frequently admit logically impossible states caused by sensor noise and neural hallucinations. In safety-critical scenarios, ground truth is fundamentally inaccessible; therefore, consistency—not truth—is the only verifiable anchor for decision-making under uncertainty. We introduce SRP (Spatiotemporal Relational Primitives), an architectural framework that enforces an Epistemic Membrane between probabilistic extraction (LLMs/VLMs) and deterministic validation (FOL constraints). SRP defines seven qualitative primitives that encode the minimal physical and relational structure required to validate multimodal world states. We demonstrate: (1) deterministic FOL validation achieving 106× compute reduction compared to semantic re-validation baselines (Fig. 3); (2) strict segregation between verified facts and abductive inference (G_core vs. G_shadow, Fig. 2); (3) a Sim2Real workflow where qualitative rules transfer perfectly while quantitative parameters are calibrated via Shadow Mode; and (4) validation across 125,079 real-world data items including 120,000 driving images (BDD100K), 105 construction safety scenarios, 4,969 traffic signs, and 36 cyber-physical attacks (SWaT). All 64 integration tests achieve 100% success with zero mocks and zero synthetic data. In the SWaT water-treatment testbed, SRP detects 95% of attacks using physics-derived constraints only, with zero attack signatures. Results indicate that constraint-driven consistency provides a portable, auditable, and computationally trivial safety layer suitable for edge and MCU deployment.
Neuro-symbolic AI, Knowledge Graphs, Safety-Critical Systems, Sim2Real Transfer, Embedded Reasoning, Constraint-Based Validation.
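To illustrate the flavor of deterministic consistency validation, the sketch below rejects two kinds of logically impossible world states. The predicates and state shape are hypothetical; the paper's seven SRP primitives and FOL constraints are not reproduced here.

```python
def consistent(state):
    """Deterministic consistency check over a symbolic world state.
    state: {object_name: {"predicates": set, "regions": set}}.
    Rejects states where an object is both moving and stationary,
    or occupies two disjoint regions at once (illustrative rules)."""
    for obj, facts in state.items():
        if {"moving", "stationary"} <= facts["predicates"]:
            return False          # mutually exclusive predicates
        if len(facts["regions"]) > 1:
            return False          # one object, one region at a time
    return True
```

A check of this kind costs a handful of set operations per object, which is what makes constraint validation cheap enough for edge and MCU deployment compared to re-running a neural model.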
S. Ceca Mijatovic, Shahram Sarkani, and Thomas Mazzuchi, Engineering Management and Systems Engineering, The George Washington University, Washington, D.C., USA
Distribution of Internet-of-Things sensors in wireless sensor networks (WSNs) often leads to transmission conflicts and inefficient energy utilization, resulting in decreased sensor communication and incomplete data for decision making. Hexagonal topology properties such as uniform distance-to-neighbor, uniform distance-to-cluster, and three-axis coordinates can be exploited for energy-efficient optimization. Leveraging a network optimization model created in AMPL with a network simulation created in Contiki-NG Cooja, this research demonstrates that WSNs with hexagonal network topology benefit from clustering, which improves network lifetime and therefore enhances WSN reliability by reducing total network energy consumption. Additionally, dynamic clustering further improves network lifetime where the ratio of cluster-member hopping energy cost to cluster-head transmission energy cost is 33.5% or less.
Wireless Sensor Network, Hexagonal topology, Cluster, Optimization, Network lifetime.
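The uniform-spacing property of a hexagonal grid that such clustering exploits is easy to see in axial coordinates, sketched below. The coordinate convention is a common one for hexagonal grids and is assumed here, not taken from the paper.

```python
# Axial (q, r) hexagonal coordinates: every cell has exactly six
# neighbors, all at unit grid distance (the uniform distance-to-neighbor
# property exploited for energy-efficient clustering).
NEIGHBOR_STEPS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def hex_distance(a, b):
    """Grid distance between two hexagonal cells in axial coordinates."""
    dq, dr = a[0] - b[0], a[1] - b[1]
    return (abs(dq) + abs(dr) + abs(dq + dr)) // 2

def neighbors(cell):
    """The six cells adjacent to the given cell."""
    q, r = cell
    return [(q + dq, r + dr) for dq, dr in NEIGHBOR_STEPS]
```

Because every hop between adjacent cells covers the same distance, the energy cost of a cluster-member-to-cluster-head path is proportional to its hop count, which is what makes the hop-cost-to-transmission-cost ratio in the abstract a natural design parameter.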