Clustering scientific publications: lessons learned through experiments with a real citation network
(2025)
Clustering scientific publications helps uncover research structures within bibliographic databases. Graph-based methods such as spectral, Louvain, and Leiden clustering are commonly used because they model citation networks directly. However, their effectiveness can diminish on real-world data. This study evaluates these clustering algorithms on a citation graph of about 700,000 articles and 4.6 million citations from the Web of Science. The results show that while scalable methods like Louvain and Leiden run efficiently, their default settings often yield poor partitions. Meaningful outcomes require careful parameter tuning, especially for large networks with uneven structure, such as a dense core surrounded by loosely connected papers. These findings offer practical lessons on handling large-scale data and on selecting and tuning methods for the specific structures encountered in bibliometric clustering tasks.
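The tuning issue the abstract points to can be sketched in a few lines. The snippet below is a minimal illustration on a toy graph, not the study's actual pipeline; the resolution grid is a placeholder. It sweeps the resolution parameter of Louvain clustering in networkx and reports the number of clusters and the modularity for each setting, showing how strongly the partition depends on this single parameter.

```python
# Minimal sketch: sweep the Louvain resolution parameter and compare the
# resulting partitions. A small planted-partition graph stands in for the
# Web of Science citation network used in the study.
import networkx as nx
from networkx.algorithms.community import louvain_communities, modularity

# Toy graph with four planted communities of 50 nodes each.
G = nx.planted_partition_graph(4, 50, 0.2, 0.01, seed=0)

for resolution in (0.5, 1.0, 2.0, 4.0):
    communities = louvain_communities(G, resolution=resolution, seed=0)
    q = modularity(G, communities, resolution=resolution)
    print(f"resolution={resolution}: {len(communities)} clusters, modularity={q:.3f}")
```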
Not always! This is our answer to the question of whether the Polyak adaptive stepsize rule in the gradient projection method is optimal. The answer is based on revisiting the subgradient projection method of Polyak [USSR Computational Mathematics and Mathematical Physics 9 (1969)] for smooth convex minimization problems whose objective function possesses a geometric property called flatness. Our results show that the method can be more flexible (the effective range of the parameter controlling the stepsize can be wider) and achieves sharper convergence rates. Applications to split feasibility/equality problems are presented, deriving for the first time the O(1/k) convergence rate of the adaptive CQ method. A theoretical guarantee of linear convergence of the gradient descent method with adaptive stepsizes for Google PageRank is also provided. In addition, numerical experiments are designed to spot the "optimal" stepsize and to compare with other basic gradient methods.
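For readers unfamiliar with the rule in question, the following sketch shows gradient projection with the classical Polyak adaptive stepsize on a toy problem; it is not the paper's experimental setup. It assumes the optimal value f* is known (here f* = 0, since the least-squares system is consistent), and the parameter lam stands for the stepsize-controlling parameter whose effective range the paper analyzes, lam in (0, 2) being the classical choice.

```python
# Minimal sketch of the gradient projection method with the Polyak adaptive
# stepsize, for f(x) = 0.5 * ||Ax - b||^2 over the nonnegative orthant.
# Assumes f* is known (f* = 0 here because the system is consistent).
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
x_true = np.abs(rng.standard_normal(20))   # a nonnegative solution exists
b = A @ x_true                             # consistent system, so f* = 0

def f(x):
    return 0.5 * np.linalg.norm(A @ x - b) ** 2

def grad(x):
    return A.T @ (A @ x - b)

def project(x):
    return np.maximum(x, 0.0)              # projection onto C = nonneg. orthant

x, f_star, lam = project(rng.standard_normal(20)), 0.0, 1.0
for _ in range(500):
    g = grad(x)
    if g @ g == 0.0:
        break
    step = lam * (f(x) - f_star) / (g @ g) # Polyak adaptive stepsize
    x = project(x - step * g)

print(f"final objective: {f(x):.3e}")
```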
In this paper, for the first time in the literature, we study the stability of solutions of two classes of feasibility problems (split equality and split feasibility problems) by set-valued and variational analysis techniques. Our idea is to equivalently reformulate the feasibility problems as parametric generalized equations, to which set-valued and variational analysis techniques apply. Sufficient conditions, as well as necessary conditions, for the Lipschitz-likeness of the involved solution maps are proved by exploiting the special structures of the problems and by using an advanced result of B.S. Mordukhovich [J. Global Optim. 28, 347–362 (2004)]. These conditions express a precise interplay among all the input data through their dual counterparts, namely transposes of matrices and regular/limiting normal cones to sets. Several examples illustrate how the obtained results work in practice and show that the assumption on the existence of a nonzero solution used in the necessary conditions cannot be dropped.
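For context, the two problem classes admit the following standard formulations (the notation here is generic and not necessarily that of the paper):

```latex
\[
\text{(SFP)}\quad \text{find } x \in C \text{ such that } Ax \in Q,
\qquad
\text{(SEP)}\quad \text{find } (x, y) \in C \times K \text{ such that } Ax = By,
\]
% with closed sets C \subseteq \mathbb{R}^n, Q \subseteq \mathbb{R}^m,
% K \subseteq \mathbb{R}^p and matrices A \in \mathbb{R}^{m \times n},
% B \in \mathbb{R}^{m \times p}.
```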
Fuzzy clustering, which allows an article to belong to multiple clusters with soft membership degrees, plays a vital role in analyzing publication data. The problem can be formulated as a constrained optimization model whose goal is to minimize the discrepancy between the similarity observed in the data and the similarity derived from a predicted distribution. While this approach benefits from leveraging state-of-the-art optimization algorithms, tailoring them to real, massive databases such as OpenAlex or Web of Science, which contain about 70 million articles and a billion citations, poses significant challenges. We analyze the potential and challenges of the approach from both mathematical and computational perspectives. Among other things, second-order optimality conditions are established, providing new theoretical insights, and practical solution methods are proposed by exploiting the structure of the problem. Specifically, we accelerate the gradient projection method using GPU-based parallel computing to efficiently handle large-scale data.
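A sketch of the kind of model described above follows. The concrete objective used here, minimizing ||S - U U^T||_F^2 over nonnegative row-stochastic membership matrices U, is an illustrative assumption, not necessarily the paper's exact formulation; the toy similarity matrix S is a placeholder for citation-derived similarities.

```python
# Minimal sketch of gradient projection for a fuzzy-clustering model:
#   minimize ||S - U U^T||_F^2  s.t. each row of U lies on the simplex.
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 4                                # articles, clusters (toy sizes)
labels = rng.integers(0, k, size=n)
S = (labels[:, None] == labels[None, :]).astype(float)  # toy similarity matrix

def project_simplex_rows(U):
    """Euclidean projection of each row onto the probability simplex."""
    m, d = U.shape
    s = -np.sort(-U, axis=1)                 # sort each row descending
    css = np.cumsum(s, axis=1) - 1.0
    rho = (s - css / np.arange(1, d + 1) > 0).sum(axis=1)
    theta = css[np.arange(m), rho - 1] / rho
    return np.maximum(U - theta[:, None], 0.0)

U = project_simplex_rows(rng.random((n, k))) # soft membership degrees
step = 1e-3
for _ in range(300):
    R = S - U @ U.T                          # residual similarity
    # gradient of the objective w.r.t. U is -4 R U; step then project
    U = project_simplex_rows(U + 4.0 * step * (R @ U))

print(f"residual norm: {np.linalg.norm(S - U @ U.T):.3f}")
```

The matrix products and the row-wise projection above are exactly the operations that parallelize well, so the GPU acceleration mentioned in the abstract can plausibly be approximated by swapping the NumPy backend for CuPy or JAX.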
By applying techniques of set-valued and variational analysis, we study the solution stability of nonhomogeneous split equality problems and nonhomogeneous split feasibility problems, where the constraint sets need not be convex. Necessary and sufficient conditions for the Lipschitz-likeness of the solution maps of these problems are given and illustrated by concrete examples. The obtained results complement those in [Huong VT, Xu HK, Yen ND. Stability analysis of split equality and split feasibility problems. arXiv:2410.16856], where the classical split equality and split feasibility problems were considered.
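The term "nonhomogeneous" is most naturally read as replacing the linear maps of the classical problems with affine ones; this reading is offered for orientation only and is not a formulation taken from the paper:

```latex
\[
\text{find } x \in C \text{ such that } Ax + a \in Q,
\qquad
\text{find } (x, y) \in C \times K \text{ such that } Ax + a = By + b,
\]
% where a and b are fixed shift vectors; a = b = 0 recovers the classical
% (homogeneous) split feasibility and split equality problems.
```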
We propose an unsupervised classification approach to large-scale text-based datasets using Large Language Models. Large text datasets, such as publications, websites, and other text-based media, carry two distinct types of features: (1) the text itself, with information conveyed through its semantics, and (2) its relationships to other texts through links, references, or shared attributes. While the latter can be described as a graph structure, enabling us to use tools and methods from graph theory as well as conventional classification methods, the former has found new potential through LLM embedding models.
Demonstrating these possibilities and their practicality, we investigate the Web of Science dataset, containing ~56 million scientific publications, through the lens of our proposed embedding method, revealing a self-structured landscape of texts. Further, we discuss strategies for combining these emerging methods with traditional graph-based approaches, potentially compensating for each other's shortcomings.
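The embedding side of such a pipeline can be sketched as follows; the model name, sample texts, and cluster count are placeholders, not choices taken from the paper.

```python
# Minimal sketch: encode texts with an LLM embedding model, then cluster
# the resulting vectors. Stands in for the paper's embedding method.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

abstracts = [
    "Spectral clustering of citation networks ...",
    "Gradient projection methods with adaptive stepsizes ...",
    "Stability of split feasibility problems ...",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model
X = model.encode(abstracts, normalize_embeddings=True)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)
```

A natural way to combine the two feature types is then to attach such embeddings as node attributes of the citation graph before applying a graph-based clustering method.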