The Case for the Anonymization of Offloaded Computation
Computation offloading (often to external computing resources over a network) has become a necessity for modern applications. At the same time, the proliferation of machine learning techniques has empowered malicious actors to use such techniques to breach the privacy of the execution process for offloaded computations. This can enable malicious actors to identify offloaded computations and infer their nature based on computation characteristics that they may have access to, even without direct access to the computation code. In this paper, we first demonstrate that even unsophisticated machine learning algorithms can accurately identify offloaded computations. We then explore the design space of anonymizing offloaded computations through the realization of a framework, called Camouflage. Camouflage features practical mechanisms to conceal characteristics related to the execution of computations, which malicious actors could otherwise use to identify computations and orchestrate further attacks. Our evaluation demonstrates that Camouflage can impede the ability of malicious actors to identify executed computations by up to 60%, while incurring modest anonymization overheads.
Updated: 2023-05-12 23:49:06
Categories: cs.CR,cs.DC
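To illustrate how easily execution characteristics can fingerprint a computation, here is a minimal sketch, not the paper's method: a nearest-centroid classifier over invented CPU-time and data-transfer features for two hypothetical workloads. The workload names, feature choices, and distributions are all assumptions for illustration.

```python
import random
import statistics

def centroid(rows):
    """Component-wise mean of a list of feature vectors."""
    return [statistics.fmean(col) for col in zip(*rows)]

def classify(x, centroids):
    """Assign x to the label of the nearest centroid (squared Euclidean)."""
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda label: dist(x, centroids[label]))

rng = random.Random(0)
# synthetic "execution fingerprints": (CPU seconds, MB transferred)
matmul = [(rng.gauss(9.0, 0.5), rng.gauss(120, 10)) for _ in range(50)]
sorting = [(rng.gauss(2.0, 0.5), rng.gauss(40, 10)) for _ in range(50)]
cents = {"matmul": centroid(matmul), "sort": centroid(sorting)}

probe = (8.8, 115.0)  # an offloaded run observed by a hypothetical adversary
label = classify(probe, cents)
```

Even this crude classifier separates workloads whose execution characteristics differ consistently, which is the kind of leakage an anonymization layer must conceal.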
Digital Forensics in the Age of Smart Environments: A Survey of Recent Advancements and Challenges
Digital forensics in smart environments is an emerging field that deals with the investigation and analysis of digital evidence in smart devices and environments. As smart environments continue to evolve, digital forensic investigators face new challenges in retrieving, preserving, and analyzing digital evidence. At the same time, recent advancements in digital forensic tools and techniques offer promising solutions to overcome these challenges. In this survey, we examine recent advancements and challenges in digital forensics within smart environments. Specifically, we review the current state-of-the-art techniques and tools for digital forensics in smart environments and discuss their strengths and limitations. We also identify the major challenges that digital forensic investigators face in smart environments and propose potential solutions to overcome these challenges. Our survey provides a comprehensive overview of recent advancements and challenges in digital forensics in the age of smart environments, and aims to inform future research in this area.
Updated: 2023-05-12 22:23:59
Categories: cs.CR
Private and Communication-Efficient Algorithms for Entropy Estimation
Modern statistical estimation is often performed in a distributed setting where each sample belongs to a single user who shares their data with a central server. Users are typically concerned with preserving the privacy of their samples, and also with minimizing the amount of data they must transmit to the server. We give improved private and communication-efficient algorithms for estimating several popular measures of the entropy of a distribution. All of our algorithms have constant communication cost and satisfy local differential privacy. For a joint distribution over many variables whose conditional independence is given by a tree, we describe algorithms for estimating Shannon entropy that require a number of samples that is linear in the number of variables, compared to the quadratic sample complexity of prior work. We also describe an algorithm for estimating Gini entropy whose sample complexity has no dependence on the support size of the distribution and can be implemented using a single round of concurrent communication between the users and the server. In contrast, the previously best-known algorithm has high communication cost and requires the server to facilitate interaction between the users. Finally, we describe an algorithm for estimating collision entropy that generalizes the best known algorithm to the private and communication-efficient setting.
Updated: 2023-05-12 20:35:10
Categories: cs.LG,cs.CR,cs.IT,math.IT,math.ST,stat.TH
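As one illustrative piece of the toolbox above, the Gini entropy G(p) = 1 - Σᵢ pᵢ² can be estimated from samples via the pairwise collision frequency, since the fraction of colliding sample pairs is an unbiased estimate of Σᵢ pᵢ². This sketch is the plain non-private plug-in estimator, not the paper's communication-efficient locally private protocol.

```python
import random
from collections import Counter

def gini_entropy_estimate(samples):
    """Estimate 1 - sum_i p_i^2 from the fraction of colliding sample pairs."""
    n = len(samples)
    counts = Counter(samples)
    # number of unordered pairs of samples that collide on the same symbol
    collisions = sum(c * (c - 1) // 2 for c in counts.values())
    return 1.0 - collisions / (n * (n - 1) // 2)

random.seed(0)
# uniform over 4 symbols: true Gini entropy = 1 - 4*(1/4)^2 = 0.75
samples = [random.randrange(4) for _ in range(20000)]
est = gini_entropy_estimate(samples)
```

Note the estimate has no dependence on the support size beyond what the samples reveal, which is the property the paper's private protocol preserves.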
Quantum Lock: A Provable Quantum Communication Advantage
Physical unclonable functions (PUFs) provide a unique fingerprint to a physical entity by exploiting inherent physical randomness. Gao et al. discussed the vulnerability of most current-day PUFs to sophisticated machine-learning-based attacks. We address this problem by integrating classical PUFs with existing quantum communication technology. Specifically, this paper proposes a generic design of provably secure PUFs, called hybrid locked PUFs (HLPUFs), providing a practical solution for securing classical PUFs. An HLPUF uses a classical PUF (CPUF) and encodes its output into non-orthogonal quantum states to hide the outcomes of the underlying CPUF from any adversary. Here we introduce a quantum lock to protect the HLPUFs from general adversaries. The indistinguishability of the non-orthogonal quantum states, together with the quantum lockdown technique, prevents the adversary from accessing the outcome of the CPUFs. Moreover, we show that by exploiting non-classical properties of quantum states, the HLPUF allows the server to reuse challenge-response pairs for further client authentication. This provides an efficient solution for running PUF-based client authentication over an extended period while maintaining a small challenge-response-pair database on the server side. Finally, we support our theoretical contributions by instantiating the HLPUF design using accessible real-world CPUFs. We use optimal classical machine-learning attacks to forge both the CPUFs and the HLPUFs, and we certify the security gap through numerical simulation of a construction that is ready for implementation.
Updated: 2023-05-12 17:48:06
Categories: quant-ph,cs.CR
Unconditionally Secure Access Control Encryption
Access control encryption (ACE) enforces, through a sanitizer as the mediator, that only legitimate sender-receiver pairs can communicate, without the sanitizer knowing the communication metadata, including the sender and recipient identities, the policy over them, and the underlying plaintext. Any illegitimate transmission is indistinguishable from pure noise. Existing works focus on computational security and require trapdoor functions and possibly other heavyweight primitives. We present the first ACE scheme with information-theoretic security (unconditional, against unbounded adversaries). Our novel randomization techniques over matrices realize sanitization (traditionally achieved via homomorphism over a fixed randomness space) such that the secret message in the hidden message subspace remains intact if and only if there is no illegitimate transmission.
Updated: 2023-05-12 16:37:42
Categories: cs.CR,cs.IT,math.IT
Comparison of machine learning models applied on anonymized data with different techniques
Anonymization techniques based on obfuscating the quasi-identifiers by means of value generalization hierarchies are widely used to achieve preset levels of privacy. To prevent different types of attacks against database privacy, it is necessary to apply several anonymization techniques beyond classical k-anonymity or $\ell$-diversity. However, applying these methods comes at the cost of reduced utility in prediction and decision-making tasks. In this work we study four classical machine learning methods currently used for classification in order to analyze the results as a function of the anonymization techniques applied and the parameters selected for each of them. The performance of these models is studied when varying the value of k for k-anonymity, and additional tools such as $\ell$-diversity, t-closeness, and $\delta$-disclosure privacy are also deployed on the well-known Adult dataset.
Updated: 2023-05-12 12:34:07
Categories: cs.LG,cs.CR,cs.DB
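The k-anonymity criterion and a one-level value generalization hierarchy discussed above can be sketched as follows; the toy records and the particular generalization rules (age decades, a 3-digit zip prefix) are invented for illustration.

```python
from collections import Counter

def is_k_anonymous(records, quasi_ids, k):
    """True if every combination of quasi-identifier values occurs in
    at least k records."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return min(groups.values()) >= k

def generalize_age(age, width=10):
    """One level of a value generalization hierarchy: bucket ages into decades."""
    lo = (age // width) * width
    return f"{lo}-{lo + width - 1}"

records = [
    {"age": 23, "zip": "47677"}, {"age": 27, "zip": "47602"},
    {"age": 25, "zip": "47678"}, {"age": 21, "zip": "47605"},
]
# raw records are all unique, so not even 2-anonymous on (age, zip)
raw_ok = is_k_anonymous(records, ["age", "zip"], 2)
# generalizing age to decades and zip to a 3-digit prefix restores 2-anonymity
anon = [{"age": generalize_age(r["age"]), "zip": r["zip"][:3] + "**"}
        for r in records]
anon_ok = is_k_anonymous(anon, ["age", "zip"], 2)
```

The utility cost the abstract refers to is visible here: after generalization, a classifier can no longer distinguish any of the four records by these quasi-identifiers.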
Differentially Private Set-Based Estimation Using Zonotopes
For large-scale cyber-physical systems, the collaboration of spatially distributed sensors is often needed to perform the state estimation process. Privacy concerns naturally arise from disclosing sensitive measurement signals to a cloud estimator that predicts the system state. To solve this issue, we propose a differentially private set-based estimation protocol that preserves the privacy of the measurement signals. Compared to existing research, our approach achieves less privacy loss and utility loss using a numerically optimized truncated noise distribution. The proposed estimator is perturbed by weaker noise than the analytical approaches in the literature to guarantee the same level of privacy, therefore improving the estimation utility. Numerical and comparison experiments with truncated Laplace noise are presented to support our approach. Zonotopes, a less conservative form of set representation, are used to represent estimation sets, giving set operations a computational advantage. The privacy-preserving noise anonymizes the centers of these estimated zonotopes, concealing the precise positions of the estimated zonotopes.
Updated: 2023-05-12 12:14:39
Categories: cs.CR,cs.SY,eess.SY
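A minimal sketch of the zonotope representation used above: a center plus generator vectors, a Minkowski sum, and a hypothetical privacy step that perturbs only the center. Plain Laplace noise stands in here for the paper's numerically optimized truncated distribution, and all parameters are illustrative.

```python
import math
import random

class Zonotope:
    """Z = {c + G x : x in [-1, 1]^m}, stored as center c and generator vectors."""
    def __init__(self, center, generators):
        self.center = list(center)
        self.generators = [list(g) for g in generators]

    def minkowski_sum(self, other):
        # Minkowski sum of zonotopes: centers add, generator lists concatenate
        c = [a + b for a, b in zip(self.center, other.center)]
        return Zonotope(c, self.generators + other.generators)

def laplace(scale, rng):
    """Sample Laplace(0, scale) by inverse CDF."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def privatize_center(z, scale, rng):
    """Hypothetical DP step: anonymize only the zonotope's center,
    leaving its shape (the generators) untouched."""
    noisy_c = [c + laplace(scale, rng) for c in z.center]
    return Zonotope(noisy_c, z.generators)

z1 = Zonotope([1.0, 2.0], [[1.0, 0.0], [0.0, 1.0]])
z2 = Zonotope([3.0, -1.0], [[0.5, 0.5]])
z3 = z1.minkowski_sum(z2)                                # center [4.0, 1.0]
z4 = privatize_center(z3, scale=0.1, rng=random.Random(0))
```

Keeping set operations on the (center, generators) pair is what gives zonotopes their computational advantage over general polytopes.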
Two-in-One: A Model Hijacking Attack Against Text Generation Models
Machine learning has progressed significantly in various applications ranging from face recognition to text generation. However, its success has been accompanied by different attacks. Recently a new attack has been proposed which raises both accountability and parasitic computing risks, namely the model hijacking attack. Nevertheless, this attack has only focused on image classification tasks. In this work, we broaden the scope of this attack to include text generation and classification models, hence showing its broader applicability. More concretely, we propose a new model hijacking attack, Ditto, that can hijack different text classification tasks into multiple generation ones, e.g., language translation, text summarization, and language modeling. We use a range of text benchmark datasets such as SST-2, TweetEval, AGnews, QNLI, and IMDB to evaluate the performance of our attacks. Our results show that by using Ditto, an adversary can successfully hijack text generation models without jeopardizing their utility.
Updated: 2023-05-12 12:13:27
Categories: cs.CR,cs.CL,cs.LG
Novel bribery mining attacks in the bitcoin system and the bribery miner's dilemma
Mining attacks allow adversaries to obtain a disproportionate share of the mining reward by deviating from the honest mining strategy in the Bitcoin system. The most well-known among them are selfish mining (SM), block withholding (BWH), fork after withholding (FAW), and bribery mining. In this paper, we propose two novel mining attacks: bribery semi-selfish mining (BSSM) and bribery stubborn mining (BSM). Both increase the relative extra reward of the adversary and subject the targeted bribery miners to the bribery miner's dilemma: under the Nash equilibrium, all targets earn less. For each target, the locally optimal strategy is to accept the bribes, yet accepting leaves them worse off than denying; for all targets, the globally optimal strategy is to deny the bribes. Quantitative analysis and simulations verify our theoretical analysis. We propose practical measures to mitigate more advanced mining attack strategies based on bribery mining and provide new ideas for addressing bribery mining attacks in the future. However, how to completely and effectively prevent these attacks still requires further research.
Updated: 2023-05-12 11:17:57
Categories: cs.GT,cs.CE,cs.CR
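The dilemma structure described above can be made concrete with a toy two-target payoff table; the numbers are invented, and only the prisoner's-dilemma shape matters: accepting the bribe strictly dominates for each target, yet both would earn more if both denied.

```python
# payoff[(s1, s2)] = (reward of target 1, reward of target 2)
# "A" = accept the bribe, "D" = deny it; values are illustrative reward units
payoff = {
    ("A", "A"): (2, 2),
    ("A", "D"): (4, 1),
    ("D", "A"): (1, 4),
    ("D", "D"): (3, 3),
}

def payoff_of(player, mine, theirs):
    """Reward of `player` (0 or 1) when playing `mine` against `theirs`."""
    pair = (mine, theirs) if player == 0 else (theirs, mine)
    return payoff[pair][player]

def strictly_dominant(player, s):
    """True if strategy s beats the alternative against every opponent choice."""
    other = "D" if s == "A" else "A"
    return all(payoff_of(player, s, t) > payoff_of(player, other, t)
               for t in ("A", "D"))

# local optimum: each target accepts, so (A, A) is the Nash equilibrium
local = strictly_dominant(0, "A") and strictly_dominant(1, "A")
# yet (D, D) pays both targets more than the equilibrium (A, A)
global_better = all(payoff[("D", "D")][p] > payoff[("A", "A")][p]
                    for p in (0, 1))
```

This is exactly the gap between the local optimum (accept) and the global optimum (deny) that the abstract describes.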
Simplification of General Mixed Boolean-Arithmetic Expressions: GAMBA
Malware code often resorts to various self-protection techniques to complicate analysis. One such technique is applying Mixed-Boolean Arithmetic (MBA) expressions as a way to create opaque predicates and diversify and obfuscate the data flow. In this work we aim to provide tools for the simplification of nonlinear MBA expressions in a very practical context to compete in the arms race between the generation of hard, diverse MBAs and their analysis. The proposed algorithm GAMBA employs algebraic rewriting at its core and extends SiMBA. It achieves efficient deobfuscation of MBA expressions from the most widely tested public datasets and simplifies expressions to their ground truths in most cases, surpassing peer tools.
Updated: 2023-05-12 07:45:57
Categories: cs.CR
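For a flavor of the expressions GAMBA targets, here are two textbook linear MBA identities (standard rewrites, not examples drawn from the paper's datasets), checked over random 64-bit words. An obfuscator emits the mixed Boolean-arithmetic right-hand sides; a simplifier's job is to rewrite them back to the simple left-hand sides.

```python
import random

# Two classic linear MBA identities over n-bit words:
#   x + y == (x ^ y) + 2*(x & y)   -- carry-save decomposition of addition
#   x ^ y == (x | y) - (x & y)
MASK = (1 << 64) - 1

def mba_add(x, y):
    return ((x ^ y) + 2 * (x & y)) & MASK

def mba_xor(x, y):
    return ((x | y) - (x & y)) & MASK

rng = random.Random(1)
for _ in range(1000):
    x, y = rng.getrandbits(64), rng.getrandbits(64)
    assert mba_add(x, y) == (x + y) & MASK
    assert mba_xor(x, y) == x ^ y
```

Nonlinear MBAs compose and nest such identities, which is what makes automated simplification to the ground truth nontrivial.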
Can we Quantify Trust? Towards a Trust-based Resilient SIoT Network
The emerging yet promising paradigm of the Social Internet of Things (SIoT) integrates the notion of the Internet of Things with human social networks. In SIoT, objects, i.e., things, can socialize with other objects in the SIoT network and can establish their social networks autonomously by modeling human behaviour. The notion of trust is imperative in realizing these characteristics of socialization in order to assess the reliability of autonomous collaboration. The perception of trust is evolving in the era of SIoT as an extension of the traditional security triad in an attempt to offer secure and reliable services, and it is considered an imperative aspect of any SIoT system for minimizing the risk of autonomous decision-making. This research investigates trust quantification by measuring trust in terms of direct trust, indirect trust as a recommendation, and the degree of SIoT relationships in terms of social similarities (community-of-interest, friendship, and co-work relationships). A weighted-sum approach is subsequently employed to synthesize all the trust features into a single trust score. The experimental evaluation demonstrates the effectiveness of the proposed model in segregating trustworthy from untrustworthy objects and in identifying the dynamic behaviour (i.e., trust-related attacks) of SIoT objects.
Updated: 2023-05-12 06:37:20
Categories: cs.CR,cs.SI
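The weighted-sum synthesis described above can be sketched in a few lines; the feature names and weight values below are illustrative assumptions, not the paper's calibrated model.

```python
def trust_score(direct, indirect, similarities, weights):
    """Weighted-sum synthesis of trust features into a single score in [0, 1].
    All features are assumed pre-normalized to [0, 1]."""
    features = {"direct": direct, "indirect": indirect, **similarities}
    assert abs(sum(weights.values()) - 1.0) < 1e-9  # convex combination
    return sum(weights[k] * features[k] for k in weights)

# hypothetical weights over direct trust, recommendations, and the three
# social-similarity relationship degrees
weights = {"direct": 0.4, "indirect": 0.2,
           "coi": 0.15, "friendship": 0.15, "cowork": 0.1}
score = trust_score(0.9, 0.7,
                    {"coi": 0.8, "friendship": 0.6, "cowork": 0.5}, weights)
```

A threshold on the resulting score is then what segregates trustworthy from untrustworthy objects.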
A Lightweight Authentication Protocol against Modeling Attacks based on a Novel LFSR-APUF
Simple authentication protocols based on conventional physical unclonable functions (PUFs) are vulnerable to modeling attacks and other security threats. This paper proposes an arbiter PUF based on a linear feedback shift register (LFSR-APUF). Unlike previously reported uses of linear feedback shift registers for challenge extension, the proposed scheme feeds the external random challenges into the LFSR module to obfuscate the linear mapping between challenge and response. This prevents attackers from obtaining valid challenge-response pairs (CRPs), significantly increasing resistance to modeling attacks. A 64-stage LFSR-APUF has been implemented on a field-programmable gate array (FPGA) board. The experimental results reveal that the proposed design can effectively resist various modeling attacks, such as logistic regression (LR), evolutionary strategies (ES), artificial neural networks (ANN), and support vector machines (SVM), limiting them to a prediction rate of 51.79% while only slightly affecting randomness, reliability, and uniqueness. Further, a lightweight authentication protocol is established based on the proposed LFSR-APUF. The protocol incorporates a low-overhead, ultra-lightweight, novel private bit-conversion Cover function that is uniquely bound to each device in the authentication network, and implements a dynamic, time-variant obfuscation scheme in combination with the proposed LFSR-APUF. The proposed authentication protocol not only resists spoofing, physical, and modeling attacks effectively, but also ensures the security of the entire authentication network by transferring important information in encrypted form from the server to the database, even when the attacker completely controls the server.
Updated: 2023-05-12 05:15:46
Categories: cs.CR
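The challenge-obfuscation idea can be sketched with a software Fibonacci LFSR: the external challenge seeds the register, which is clocked before the arbiter stages ever see it, so an attacker collecting CRPs no longer observes the raw challenge-to-response mapping. The tap positions below follow the commonly tabulated maximal-length polynomial x^64 + x^63 + x^61 + x^60 + 1, not necessarily the paper's configuration, and the round count is an assumption.

```python
def lfsr_scramble(challenge, taps=(64, 63, 61, 60), rounds=64, width=64):
    """Clock a Fibonacci LFSR seeded with the challenge for `rounds` steps.
    The resulting state, not the raw challenge, would drive the arbiter
    PUF stages in an LFSR-APUF-style design."""
    mask = (1 << width) - 1
    state = challenge & mask
    for _ in range(rounds):
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1    # XOR of the tapped bits
        state = ((state << 1) | fb) & mask  # shift left, feed back into bit 0
    return state

c = 0x0123456789ABCDEF
obfuscated = lfsr_scramble(c)
```

Because each LFSR step is an invertible linear map, distinct challenges still map to distinct scrambled states, preserving the CRP space while hiding the linear structure from a model-building attacker.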
Adversarial Security and Differential Privacy in mmWave Beam Prediction in 6G networks
In the forthcoming era of 6G, mmWave communication is envisioned to be used in dense user scenarios with high bandwidth requirements, which necessitate efficient and accurate beam prediction. Machine learning (ML) based approaches are emerging as a critical solution for achieving such efficient beam prediction for 6G mmWave communications. However, most contemporary ML classifiers are quite susceptible to adversarial inputs: attackers can easily perturb the methodology through noise addition to the model itself. To mitigate this, the current work presents a defensive mechanism that attenuates adversarial attacks against ML-based mmWave beam prediction models by incorporating adversarial training. Furthermore, since training a 6G mmWave beam prediction model requires large and comprehensive datasets that could include sensitive information such as user locations, differential privacy (DP) is introduced as a technique to preserve confidentiality by purposefully adding low-sensitivity controlled noise to the datasets. It ensures that even if information about a user's location could be retrieved, the attacker would have no means to determine whether the information is significant or meaningless. With ray-tracing simulations for various outdoor and indoor scenarios, we illustrate the advantage of our proposed framework in terms of beam prediction accuracy and effective achievable rate, while ensuring security and privacy in communications.
Updated: 2023-05-12 04:58:40
Categories: cs.CR
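The controlled-noise idea can be sketched with the standard Laplace mechanism for one numeric feature; the sensitivity, epsilon, and feature values below are placeholders, and the paper's "low-sensitivity controlled noise" is stood in for by plain Laplace noise.

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) by inverse CDF."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def privatize(value, sensitivity, epsilon, rng):
    """epsilon-DP Laplace mechanism for one numeric feature: add noise with
    scale = sensitivity / epsilon before the value enters the training set."""
    return value + laplace_noise(sensitivity / epsilon, rng)

rng = random.Random(42)
# e.g., a (made-up) user-location coordinate released 5000 times
noisy = [privatize(100.0, sensitivity=1.0, epsilon=0.5, rng=rng)
         for _ in range(5000)]
mean = sum(noisy) / len(noisy)
```

Individual releases are masked by the noise, while aggregate statistics (here, the mean) remain usable for training, which is the utility/privacy trade-off the abstract evaluates.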
Privacy-Preserving Adaptive Traffic Signal Control in a Connected Vehicle Environment
Although Connected Vehicles (CVs) have demonstrated tremendous potential to enhance traffic operations, they can impose privacy risks on individual travelers, e.g., leaking sensitive information about their frequently visited places, routing behavior, etc. Despite the large body of literature that devises various algorithms to exploit CV information, research on privacy-preserving traffic control is still in its infancy. In this paper, we aim to fill this research gap and propose a privacy-preserving adaptive traffic signal control method using CV data. Specifically, we leverage secure Multi-Party Computation and differential privacy to devise a privacy-preserving CV data aggregation mechanism, which can calculate key traffic quantities without any CVs having to reveal their private data. We further develop a linear optimization model for adaptive signal control based on the traffic variables obtained via the data aggregation mechanism. The proposed linear programming problem is further extended to a stochastic programming problem to explicitly handle the noises added by the differentially private mechanism. Evaluation results show that the linear optimization model preserves privacy with a marginal impact on control performance, and the stochastic programming model can significantly reduce residual queues compared to the linear programming model, with almost no increase in vehicle delay. Overall, our methods demonstrate the feasibility of incorporating privacy-preserving mechanisms in CV-based traffic modeling and control, which guarantees both utility and privacy.
Updated: 2023-05-12 03:00:30
Categories: eess.SY,cs.CR,cs.SY,math.OC
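The secure aggregation step can be sketched with plain additive secret sharing, a simplification of the paper's mechanism (which combines MPC with differential privacy); the modulus, server count, and per-vehicle counts are invented for illustration.

```python
import random

P = 2**61 - 1  # public prime modulus (illustrative choice)

def share(value, n_servers, rng):
    """Split `value` into n additive shares mod P; any n-1 shares together
    are uniformly random and reveal nothing about the value."""
    shares = [rng.randrange(P) for _ in range(n_servers - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def aggregate(all_shares):
    """Each CV sends one share to each server; servers sum their shares
    locally, and only the combined total (e.g., a lane's vehicle count)
    is ever reconstructed."""
    n_servers = len(all_shares[0])
    server_sums = [sum(s[j] for s in all_shares) % P for j in range(n_servers)]
    return sum(server_sums) % P

rng = random.Random(7)
counts = [3, 5, 2, 7]  # hypothetical per-vehicle contributions
total = aggregate([share(c, 3, rng) for c in counts])
```

The signal controller thus obtains the key traffic quantity (the total) without any single party seeing an individual CV's data, matching the aggregation property the abstract describes.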
Stratified Adversarial Robustness with Rejection
Recently, there is an emerging interest in adversarially training a classifier with a rejection option (also known as a selective classifier) for boosting adversarial robustness. While rejection can incur a cost in many applications, existing studies typically associate zero cost with rejecting perturbed inputs, which can result in the rejection of numerous slightly-perturbed inputs that could be correctly classified. In this work, we study adversarially-robust classification with rejection in the stratified rejection setting, where the rejection cost is modeled by rejection loss functions monotonically non-increasing in the perturbation magnitude. We theoretically analyze the stratified rejection setting and propose a novel defense method -- Adversarial Training with Consistent Prediction-based Rejection (CPR) -- for building a robust selective classifier. Experiments on image datasets demonstrate that the proposed method significantly outperforms existing methods under strong adaptive attacks. For instance, on CIFAR-10, CPR reduces the total robust loss (for different rejection losses) by at least 7.3% under both seen and unseen attacks.
Updated: 2023-05-12 01:00:57
Categories: cs.LG,cs.CR,cs.CV
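The stratified rejection loss above (monotonically non-increasing in the perturbation magnitude) can be sketched with a linear decay as one illustrative choice; the function shape, budget eps, and toy inputs are assumptions, not the paper's exact setup.

```python
def rejection_loss(perturb_magnitude, eps=0.3):
    """Illustrative stratified rejection cost: rejecting a clean input
    (magnitude 0) costs 1, rejecting one perturbed at or beyond eps costs 0,
    with linear (hence monotonically non-increasing) decay in between."""
    return max(0.0, 1.0 - perturb_magnitude / eps)

def total_robust_loss(predictions, labels, magnitudes):
    """0-1 loss on accepted inputs plus rejection_loss on rejected ones;
    a prediction of None denotes rejection by the selective classifier."""
    loss = 0.0
    for pred, y, mag in zip(predictions, labels, magnitudes):
        if pred is None:
            loss += rejection_loss(mag)
        else:
            loss += 0.0 if pred == y else 1.0
    return loss / len(labels)

# one correct acceptance, one rejection at half the budget, one error
example = total_robust_loss([1, None, 0], [1, 0, 1], [0.0, 0.15, 0.0])
```

Unlike the zero-cost-rejection setting, this loss penalizes rejecting slightly perturbed inputs that could have been classified correctly, which is the behavior CPR is trained to avoid.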