

For printing equipment manufacturing enterprises, the selection of an enterprise resource planning (ERP) system involves multiple factors, including functional fit, technical performance, and cost-effectiveness, making it a complex multi-criteria decision-making (MCDM) problem. In this study, a method for ERP system selection suited to printing equipment manufacturing enterprises is proposed. First, a set of evaluation criteria covering five dimensions (functionality, technicality, implementability, economy, and reputation) is established through a literature review. Second, the fuzzy analytic hierarchy process (FAHP) is used to determine the weights of the evaluation criteria. Then, a gray relational analysis (GRA)-enhanced Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) is applied to rank the alternatives. Finally, a dual verification mechanism combining weight perturbation and parameter sensitivity analysis is designed to test the robustness of the ranking results. Experimental results show that Alternative 2 is the best ERP system and that criteria such as core functionality coverage and purchase price prove especially influential. The study demonstrates that, through its methodological design and verification mechanism, the model effectively integrates subjective and objective information, providing a solution that is both scientifically sound and practically applicable for ERP system selection in printing equipment manufacturing enterprises.
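To make the ranking stage concrete, here is a minimal sketch of one common GRA-enhanced TOPSIS variant: gray relational grades to the ideal and anti-ideal solutions are fused with the usual Euclidean closeness. The decision matrix, FAHP weights, and fusion rule are illustrative assumptions, not the paper's data or exact formulation.

```python
import numpy as np

# Illustrative decision matrix: three ERP alternatives x five criteria
# (functionality, technicality, implementability, economy, reputation).
# Scores and FAHP weights are made-up placeholders, not the paper's data.
X = np.array([[7.0, 8.0, 6.5, 7.5, 8.0],
              [8.5, 8.0, 7.5, 8.0, 7.5],
              [6.0, 7.0, 8.0, 6.5, 7.0]])
w = np.array([0.30, 0.20, 0.15, 0.25, 0.10])   # assumed FAHP weights

# Standard TOPSIS preprocessing: vector-normalize, then weight.
V = w * X / np.linalg.norm(X, axis=0)
v_pos, v_neg = V.max(axis=0), V.min(axis=0)    # ideal / anti-ideal points

def gra_grade(V, ref, rho=0.5):
    """Deng's gray relational grade of each alternative to a reference."""
    d = np.abs(V - ref)
    coef = (d.min() + rho * d.max()) / (d + rho * d.max())
    return coef.mean(axis=1)

# Fuse distance-based and relation-based closeness (one common variant).
s_pos = np.linalg.norm(V - v_pos, axis=1)
s_neg = np.linalg.norm(V - v_neg, axis=1)
g_pos, g_neg = gra_grade(V, v_pos), gra_grade(V, v_neg)
near = s_neg / s_neg.max() + g_pos / g_pos.max()   # affinity to the ideal
far = s_pos / s_pos.max() + g_neg / g_neg.max()    # affinity to anti-ideal
closeness = near / (near + far)
print("ranking (best first):", np.argsort(-closeness) + 1)
```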

Accurate spectral prediction of CMYK printed images is critical for ensuring color reproduction fidelity in modern printing processes. Traditional physical models, such as the Murray–Davies and Yule–Nielsen models, are limited in capturing the complexity of ink interactions and the contribution of black ink, which degrades spectral prediction accuracy. To address these challenges, a novel multi-output weighted support vector regression (MO-WSVR) model for multispectral reconstruction of CMYK printed images is proposed. By modeling multiple spectral bands jointly, MO-WSVR predicts a range of spectral points simultaneously. Furthermore, the model incorporates a dynamic weighting mechanism that assigns greater weights to spectral points with higher prediction errors, enabling it to better accommodate the inherent characteristics of CMYK printed images. Experimental results demonstrate that MO-WSVR significantly outperforms traditional physical models and existing data-driven prediction methods in terms of both root mean square error (RMSE) and CIEDE2000 color difference.
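As a rough illustration of dynamic error-based weighting, the sketch below fits one scikit-learn SVR per spectral band and tightens the epsilon-tube of poorly predicted bands over a few passes. This is only a loose analogue of the paper's joint multi-output formulation; the toy data, band count, and hyperparameters are assumptions.

```python
import numpy as np
from sklearn.svm import SVR

# Toy stand-in data: CMYK ink coverages -> reflectance at 8 spectral
# bands. Real training data would come from measured printed patches.
rng = np.random.default_rng(0)
X = rng.random((200, 4))                        # CMYK coverages
Y = np.stack([np.exp(-X @ rng.random(4)) for _ in range(8)], axis=1)

# One SVR per band; bands with larger error get a tighter epsilon on
# the next pass, loosely mimicking the dynamic band weighting.
band_eps = np.full(Y.shape[1], 0.05)
for _ in range(3):                              # a few reweighting rounds
    models, rmse = [], np.empty(Y.shape[1])
    for b in range(Y.shape[1]):
        m = SVR(kernel="rbf", C=10.0, epsilon=band_eps[b]).fit(X, Y[:, b])
        rmse[b] = np.sqrt(np.mean((m.predict(X) - Y[:, b]) ** 2))
        models.append(m)
    band_eps *= rmse.mean() / np.maximum(rmse, 1e-9)  # shrink eps on bad bands
    band_eps = band_eps.clip(1e-3, 0.1)

print("per-band RMSE after reweighting:", np.round(rmse, 4))
```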

The technology of generating tactile data from visual modalities holds significant importance in cutting-edge fields such as tactile rendering, virtual reality, and robotics. It bypasses the cumbersome process of manual tactile data collection and overcomes the limitations inherent in physical contact, opening new avenues for advancement in related fields. However, current methods struggle to ensure consistent and reliable results when generating tactile data across different categories, which greatly restricts their practical application. To address this problem, the authors developed the T-CGAN cross-modal generation framework. Built on the FrictGAN architecture, the framework introduces an image-category conditional constraint mechanism, together with texture feature extraction and an L1 loss, to precisely regulate the generation process and ensure high-quality output. Specifically, the framework generates spectrograms of friction coefficient signals from fabric texture images and then converts these spectrograms into one-dimensional friction coefficient signals using the Griffin–Lim algorithm. The authors employed root mean square error and mean absolute error to quantitatively analyze the differences among generated spectrograms, reconstructed signals, and their corresponding ground truths, and conducted a comprehensive comparison with existing methods. Extensive experimental results demonstrate that the method significantly outperforms existing techniques in both accuracy and stability, providing a superior solution for tactile data generation.
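The spectrogram-to-signal step relies on the well-known Griffin–Lim phase-recovery algorithm, available in librosa. The sketch below fakes a generator output from a synthetic friction signal purely for illustration; the sampling rate, FFT settings, and iteration count are assumptions rather than the paper's configuration.

```python
import numpy as np
import librosa

# Stand-in for a T-CGAN output: the magnitude spectrogram of a friction
# signal. Here it is faked from a synthetic signal purely to illustrate.
sr, n_fft, hop = 1000, 256, 64                    # assumed settings
t = np.arange(2 * sr) / sr
truth = 0.4 + 0.05 * np.sin(2 * np.pi * 30 * t)   # toy friction coefficient
S = np.abs(librosa.stft(truth, n_fft=n_fft, hop_length=hop))

# Griffin-Lim iteratively estimates the phase the magnitude spectrogram
# discarded, turning it back into a one-dimensional signal.
recon = librosa.griffinlim(S, n_iter=60, hop_length=hop, win_length=n_fft)

n = min(len(truth), len(recon))
rmse = np.sqrt(np.mean((recon[:n] - truth[:n]) ** 2))
mae = np.mean(np.abs(recon[:n] - truth[:n]))
print(f"RMSE={rmse:.4f}  MAE={mae:.4f}")          # the paper's two metrics
```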

In the realm of audio-driven facial animation, existing research predominantly focuses on head animation, and few methods can generate full-body videos. The approaches that do produce full-body videos typically animate only the face, which leads to the prevalent issue of head–body separation. This disjointedness seriously undermines visual coherence and the naturalness of human–computer interaction. To overcome these limitations, the authors introduce SynPoseVAE, an enhanced version of the PoseVAE model that incorporates body-related information. SynPoseVAE acquires detailed human pose data by adopting a bottom-up human pose estimation method to detect body keypoints and incorporates this information into pose prediction, thereby addressing head–body separation. Additionally, the authors design a new loss function that accounts for both head and body postures; it acts as a crucial regulator, enhancing the coordination between head and body movements. Optimizing against this loss significantly reduces head–body separation and ensures that the generated animations are more natural and coherent. Experimental results show that SynPoseVAE outperforms traditional methods and can generate highly coordinated full-body animations, greatly improving the quality of human–computer interaction in audio-driven facial animation synthesis.
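A plausible shape for such a joint objective is sketched below in PyTorch: separate supervision of head and body keypoints plus a coordination term on the head–body offset. The keypoint layout, index split, and weighting are assumptions; the paper's exact loss is not reproduced.

```python
import torch

def head_body_loss(pred, target, head_idx, body_idx, lam=0.5):
    """Illustrative loss in the spirit of SynPoseVAE: supervise head and
    body keypoints separately, plus a coordination term on the offset
    between the head centroid and the body centroid. The exact form of
    the paper's loss is not reproduced here."""
    l_head = torch.mean((pred[:, head_idx] - target[:, head_idx]) ** 2)
    l_body = torch.mean((pred[:, body_idx] - target[:, body_idx]) ** 2)
    off_p = pred[:, head_idx].mean(1) - pred[:, body_idx].mean(1)
    off_t = target[:, head_idx].mean(1) - target[:, body_idx].mean(1)
    l_coord = torch.mean((off_p - off_t) ** 2)   # head-body coordination
    return l_head + l_body + lam * l_coord

# Toy shapes: batch of 8 poses, 17 keypoints (COCO layout), 2-D coords.
pred = torch.randn(8, 17, 2, requires_grad=True)
target = torch.randn(8, 17, 2)
loss = head_body_loss(pred, target, head_idx=list(range(5)),
                      body_idx=list(range(5, 17)))
loss.backward()
print(float(loss))
```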

High-visibility watermarks with strong visual saliency are particularly vulnerable to removal by deep learning models. Existing adversarial-perturbation-based protection methods for visible watermarks often suffer from poor perturbation stability and significant image quality degradation in high-visibility scenarios. To address these challenges, the authors propose a cascaded coarse-to-fine framework that generates an adversarial-perturbation High-visibility Watermark Vaccine (HWV), aimed specifically at protecting high-visibility watermarks. They establish a watermark-driven perturbation generation model and design cascaded loss functions for the coarse and fine stages to guide the multi-phase search for a globally optimal adversarial solution. In the coarse stage, a composite loss function achieves robust protection of high-visibility watermarks; in the fine stage, a perturbation minimization objective mitigates the impact of the perturbation on image quality. Moreover, the authors propose a novel gradient normalization equation combined with a dynamic momentum update strategy to adaptively optimize the perturbation step size, accelerating convergence toward the global optimum of the loss function. Experimental results on the CLWD dataset demonstrate that the proposed method effectively prevents removal attacks targeting high-visibility watermarked images. Compared to conventional single-stage loss methods, it also significantly improves the image quality of perturbed watermark images, achieving a peak signal-to-noise ratio greater than 44 dB. This work provides a novel perspective for enhancing digital image copyright protection against deep-learning-based attacks.
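The momentum-normalized update can be illustrated with a step in the spirit of MI-FGSM, shown below in PyTorch. The paper's own gradient normalization equation and dynamic momentum rule differ in detail and are not reproduced; the tensors here are toy stand-ins.

```python
import torch

def perturb_step(x_adv, grad, momentum, step, mu=0.9):
    """One momentum-normalized update in the spirit of MI-FGSM; the
    paper's normalization and dynamic-momentum rules differ in detail."""
    # Normalize the gradient by its mean absolute value so the step
    # size stays stable across iterations, then fold it into momentum.
    g = grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True).clamp_min(1e-12)
    momentum = mu * momentum + g
    x_adv = (x_adv + step * momentum.sign()).clamp(0.0, 1.0)
    return x_adv, momentum

x = torch.rand(1, 3, 64, 64)       # toy watermarked image batch
m = torch.zeros_like(x)
g = torch.randn_like(x)            # stand-in for dL/dx from the remover net
x, m = perturb_step(x, g, m, step=2 / 255)
print(x.min().item(), x.max().item())
```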

Knowledge graphs play a critical role in intelligent systems, but they face persistent challenges of incomplete data acquisition, noisy information, and inefficient inference under dynamic updates. To address these issues, the authors propose a graph-embedding-based framework that integrates three novel components: (1) a neighborhood-enhanced embedding module that captures richer structural semantics, (2) an inference optimization mechanism based on contextual consistency and confidence reweighting, and (3) a dynamic update strategy for efficient incremental learning. Extensive experiments on FB15k-237, WN18RR, and MedKG show clear improvements over state-of-the-art baselines: gains of 8–15% in Mean Reciprocal Rank (MRR) and 3–6% in Hits@10, demonstrating substantial accuracy improvements in link prediction. On dynamic update tasks, the method maintains almost identical accuracy to full retraining (AUC difference < 0.2%) while reducing update time 7.7-fold. These results verify that the framework significantly enhances both the effectiveness and efficiency of knowledge graph reasoning.
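For reference, the two link-prediction metrics quoted above are computed from the rank assigned to each true entity, as in this minimal sketch:

```python
import numpy as np

def mrr_hits(ranks, k=10):
    """Link-prediction metrics used above: Mean Reciprocal Rank and
    Hits@k, computed from the rank of each true entity (rank 1 = the
    true entity scored highest among candidates)."""
    ranks = np.asarray(ranks, dtype=float)
    return (1.0 / ranks).mean(), (ranks <= k).mean()

# Toy ranks for five test triples.
mrr, hits10 = mrr_hits([1, 3, 2, 15, 7])
print(f"MRR={mrr:.3f}  Hits@10={hits10:.2f}")
```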

High dynamic range (HDR) imaging effectively enhances image quality in driving scenarios. However, HDR image synthesis in automotive cameras remains challenging under conditions such as high-contrast scenes, low-light environments, and vehicle motion. Automotive cameras typically optimize image quality by adjusting exposure time, which controls the duration of light capture, and by tuning analog gain, which amplifies the sensitivity of the imaging sensor. This study aims to identify the optimal combinations of exposure time and analog gain for HDR image synthesis in automotive cameras. The authors acquire three frames under different exposure-time and analog-gain combinations and synthesize HDR images from them. A composite quality evaluation method is established based on four dimensions: tone range, tone levels, contrast, and signal-to-noise ratio. Quantitative analysis reveals that the best HDR image quality is achieved when the exposure time ranges from 5 ms to 20 ms and the analog gain ranges from 1× to 2×. The proposed HDR synthesis strategy has significant practical value for automotive vision systems: by improving image quality, it provides more accurate and reliable visual information for autonomous driving and advanced driver-assistance systems, enhancing driving safety and user experience.
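A minimal version of such a bracketed-exposure pipeline can be sketched with OpenCV's standard Debevec tools, as below. The synthetic ramp scene stands in for real automotive captures, and the exposure times follow the 5-20 ms range identified above; analog gain is not modeled here.

```python
import cv2
import numpy as np

# Simulate three frames of one scene at different exposure times (s);
# real bracketed captures from the automotive camera would replace them.
rng = np.random.default_rng(0)
times = np.array([0.005, 0.010, 0.020], dtype=np.float32)   # 5/10/20 ms
ramp = np.tile(np.linspace(0, 1, 256, dtype=np.float32), (256, 1))
scene = cv2.merge([ramp] * 3)                               # gray-ramp scene
frames = [np.clip(scene * (t / times.mean()) * 255
                  + rng.normal(0, 2, scene.shape), 0, 255).astype(np.uint8)
          for t in times]

# Debevec pipeline: recover the camera response curve, merge the frames
# into a radiance map, then tone-map to a displayable 8-bit image.
response = cv2.createCalibrateDebevec().process(frames, times)
hdr = cv2.createMergeDebevec().process(frames, times, response)
ldr = cv2.createTonemap(gamma=2.2).process(hdr)
cv2.imwrite("hdr_result.png", np.clip(ldr * 255, 0, 255).astype(np.uint8))
```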

This paper presents an adaptive method for extracting fabric pattern templates, focusing on efficiently and accurately digitizing the textural features of traditional fabric images. With Segment Anything Model 2 (SAM2) automatic mask generation at its core, the method precisely segments color blocks in fabric images, providing high-quality data for further processing. It employs a multistep strategy. First, color quantization and bilateral filtering reduce image complexity, remove noise, and enhance edges. Second, edge detection identifies prominent edges to guide SAM2 segmentation, ensuring accuracy and reliability. Finally, the masks generated by SAM2 are classified and merged according to the colors they cover in the original image, producing clear pattern templates. Validated on numerous real fabric images, the method shows strong adaptability and efficiency in extracting color templates. It provides robust support for the digital preservation of traditional fabric patterns and opens up opportunities for innovative applications and heritage development.
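The preprocessing stages can be sketched with standard OpenCV calls, as below. The toy image, cluster count, and filter settings are assumptions; SAM2's automatic mask generation and the final color-based mask merging are beyond this snippet.

```python
import cv2
import numpy as np

# Toy stand-in for a fabric image: two color blocks plus noise. A real
# scanned fabric image would be loaded here instead.
rng = np.random.default_rng(0)
img = np.full((256, 256, 3), (180, 40, 40), np.uint8)
cv2.circle(img, (128, 128), 70, (40, 40, 200), -1)
img = np.clip(img.astype(np.int16) + rng.integers(-25, 25, img.shape),
              0, 255).astype(np.uint8)

# 1) Color quantization via k-means in color space (k is scene-dependent).
Z = img.reshape(-1, 3).astype(np.float32)
crit = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, labels, centers = cv2.kmeans(Z, 4, None, crit, 5, cv2.KMEANS_PP_CENTERS)
quant = centers[labels.flatten()].astype(np.uint8).reshape(img.shape)

# 2) Bilateral filtering: smooth residual noise while keeping block edges.
smooth = cv2.bilateralFilter(quant, d=9, sigmaColor=75, sigmaSpace=75)

# 3) Edge detection to steer the subsequent SAM2 segmentation step.
edges = cv2.Canny(cv2.cvtColor(smooth, cv2.COLOR_BGR2GRAY), 50, 150)
cv2.imwrite("edges.png", edges)
# The filtered image and edge map would then feed SAM2's automatic mask
# generator; mask classification and merging by color are not shown.
```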

As society rapidly ages, the demand for mHealth products among elderly users is increasing. However, most current research focuses on functional design and lacks in-depth analysis of the emotional needs and usage habits of older adults. Using the A-Kano model and three-level emotional design theory, this study gathered and analyzed the functional needs, emotional needs, and usage habits of elderly users through semi-structured interviews and A-Kano questionnaires. The results reveal that elderly users' needs fall into instinctive, behavioral, and reflective categories. At the instinctive level, a simple interface layout (A1) and bright interface colors (A2) are essential. At the behavioral level, emergency assistance (B5) and voice interaction (B8) are Must-be needs, while online consultation (B1) and appointment registration (B2) are One-dimensional needs. At the reflective level, health reminders (C2) are crucial, and community interaction (C1) is a One-dimensional need. Based on these findings, the study proposes targeted interface design strategies, whose effectiveness was verified through user testing: elderly users rated Functionality, Visualization, Emotionality, and Interactivity highly. This study not only addresses the gaps in existing research on the emotional needs and personalized design of elderly users but also provides comprehensive theoretical support and practical guidance for designing mHealth apps for this group. Future research could increase sample diversity, incorporate multiple research methods, and explore long-term impacts to improve product generalizability and user satisfaction.
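For reference, assigning a feature to these Kano categories conventionally uses the standard Kano evaluation table over paired functional/dysfunctional answers, as in the sketch below. The A-Kano model used in the study refines this with analytical scoring, which is not reproduced here.

```python
# Classic Kano evaluation table: each questionnaire pair (functional
# answer x dysfunctional answer) maps to one category.
ANSWERS = ["like", "must-be", "neutral", "live-with", "dislike"]
TABLE = [  # rows: functional answer; cols: dysfunctional answer
    ["Q", "A", "A", "A", "O"],
    ["R", "I", "I", "I", "M"],
    ["R", "I", "I", "I", "M"],
    ["R", "I", "I", "I", "M"],
    ["R", "R", "R", "R", "Q"],
]

def kano_class(functional, dysfunctional):
    """A=Attractive, O=One-dimensional, M=Must-be, I=Indifferent,
    R=Reverse, Q=Questionable."""
    return TABLE[ANSWERS.index(functional)][ANSWERS.index(dysfunctional)]

# e.g. emergency assistance (B5): users expect it and dislike its absence.
print(kano_class("must-be", "dislike"))   # -> "M" (Must-be need)
```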