Subject: Animation and exchanges in the Petri Nets community
List archive
[PN-world] (PN) [qest-announce] CFP - TOMPECS Special Issue on Performance Evaluation of Federated Learning Systems
Chronological Thread
- From: Marco Paolieri <address@concealed>
- To: address@concealed
- Subject: [PN-world] (PN) [qest-announce] CFP - TOMPECS Special Issue on Performance Evaluation of Federated Learning Systems
- Date: Tue, 26 Mar 2024 14:21:57 -0700
- List-archive: <https://groups.google.com/a/unifi.it/group/qest-announce-group/>
- List-id: <qest-announce-group.unifi.it>
- Mailing-list: list address@concealed; contact address@concealed
============================================
ACM TOMPECS (Transactions on Modeling and
Performance Evaluation of Computing Systems)
============================================
Special Issue:
Performance Evaluation of Federated Learning Systems
https://dl.acm.org/journal/tompecs/calls-for-papers
MOTIVATION
==========
Federated learning has recently emerged as a popular privacy-preserving
approach for training machine learning models on data that is
scattered across multiple heterogeneous devices/clients. In federated
learning, clients iteratively compute updates to the machine learning
models on their local datasets. These updates are periodically
aggregated across clients, typically, but not always, with the help of
a central parameter server.
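The iterative scheme described above can be sketched as a minimal
FedAvg-style loop; the one-parameter model, the client datasets, and
the hyperparameters below are illustrative assumptions, not details
taken from this call:

```python
# Minimal sketch of federated averaging: each client runs a few local
# gradient steps on its own data, and a central server aggregates the
# resulting weights (weighted by local dataset size).

def local_update(weight, data, lr=0.1, steps=5):
    """Client side: gradient steps for least-squares fit y ~ w * x."""
    for _ in range(steps):
        grad = sum(2 * x * (weight * x - y) for x, y in data) / len(data)
        weight -= lr * grad
    return weight

def server_aggregate(weights, sizes):
    """Server side: average client weights, weighted by dataset size."""
    total = sum(sizes)
    return sum(w * n for w, n in zip(weights, sizes)) / total

# Three heterogeneous clients; all local data follows y = 2 * x.
clients = [[(1.0, 2.0), (2.0, 4.0)],
           [(3.0, 6.0)],
           [(0.5, 1.0), (4.0, 8.0)]]
global_w = 0.0
for _ in range(20):  # communication rounds
    local_ws = [local_update(global_w, d) for d in clients]
    global_w = server_aggregate(local_ws, [len(d) for d in clients])
# global_w converges toward the shared optimum w = 2.0
```

Even in this toy setting, the per-client update maps differ (each
client's data induces a different local step), which is the kind of
heterogeneity the call highlights.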
In many real-world applications of federated learning, such as
connected and autonomous vehicles (CAVs), the underlying
distributed/decentralized systems on which federated learning
algorithms execute exhibit a high degree of heterogeneity,
including but not limited to data distributions, computation speeds,
and external local environments. Moreover, the clients in federated
learning systems are often resource-constrained edge or end devices
and may compete for common resources such as communication bandwidth.
Many federated learning algorithms have been proposed and analyzed,
both experimentally and theoretically, yet these analyses cover only a
limited range of heterogeneity. In addition, running federated
learning in resource-constrained settings often presents complex and
poorly understood tradeoffs among various performance metrics,
including final accuracy, convergence rate, and resource consumption.
TOPICS
======
This special issue will focus on the performance evaluation of
federated learning systems. We solicit papers that include theoretical
models or numerical analysis of federated learning performance, as
well as system-oriented papers that evaluate implementations of
federated learning systems. Specific topics of interest include, but
are not limited to:
- Novel techniques for analyzing the convergence of federated
learning algorithms
- Performance analysis of emerging federated learning paradigms, e.g.,
personalized models, asynchronous learning, cache-enhanced learning
- Analysis of performance tradeoffs in federated learning systems
- Active client selection in federated learning
- Fairness metrics for federated learning systems
- Novel federated learning algorithms that aim to address system
heterogeneity or other practical implementation challenges, e.g.,
dynamic client availability
- Benchmark platforms that enable evaluation of multiple federated
learning algorithms
- New federated learning algorithms or analysis frameworks motivated
by specific applications, e.g., large language models or
recommendation systems
- Experimental results from large-scale federated learning deployments
IMPORTANT DATES
===============
- Submission deadline: April 22, 2024
- First-round review decisions: June 30, 2024
- Deadline for revision submissions: August 31, 2024
- Notification of final decisions: October 15, 2024
- Tentative publication: December 1, 2024
SUBMISSION INFORMATION
======================
Submissions should follow the standard ACM TOMPECS formatting
requirements:
https://dl.acm.org/journal/tompecs/author-guidelines#submission
We will use Manuscript Central (https://mc.manuscriptcentral.com/tompecs)
to handle submissions.
GUEST EDITORS
=============
- Carlee Joe-Wong, Carnegie Mellon University
address@concealed
- Lili Su, Northeastern University
address@concealed
- [PN-world] (PN) [qest-announce] CFP - TOMPECS Special Issue on Performance Evaluation of Federated Learning Systems, Marco Paolieri, 03/28/2024