[1910.14667] Making an Invisibility Cloak: Real World Adversarial Attacks on Object Detectors

We present a systematic study of adversarial attacks on state-of-the-art object detection frameworks. Using standard detection datasets, we train patterns that suppress the objectness scores produced by a range of commonly used detectors, and ensembles of detectors. Through extensive experiments, we benchmark the effectiveness of adversarially trained patches under both white-box and black-box settings, and quantify transferability of attacks between datasets, object classes, and detector models. Finally, we present a detailed study of physical world attacks using printed posters and wearable clothes, and rigorously quantify the performance of such attacks with different metrics.
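The core idea in the abstract, training a pattern that suppresses a detector's objectness scores, can be illustrated with a toy sketch. The real attack backpropagates through a full detector (e.g. YOLO or Faster R-CNN) on detection datasets; here a hypothetical linear-sigmoid "objectness" surrogate stands in for the detector so the gradient step is explicit. All names and scales below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in "detector": objectness = sigmoid(w . x).
# A real attack would differentiate through the detector network instead.
w = rng.normal(size=100) * 0.1

def objectness(x):
    return 1.0 / (1.0 + np.exp(-w @ x))

# Patch pixels constrained to [0, 1]; projected gradient descent
# drives the surrogate objectness score toward zero.
patch = rng.uniform(size=100)
lr = 1.0
for _ in range(500):
    s = objectness(patch)
    grad = s * (1.0 - s) * w          # d(objectness)/d(patch)
    patch = np.clip(patch - lr * grad, 0.0, 1.0)

print(objectness(patch))  # far below the initial score
```

In the paper's setting the same loop runs over batches of training images with the patch rendered onto people, and the loss aggregates objectness over all candidate boxes (and over an ensemble of detectors for transferable attacks); this sketch only shows the suppression objective itself.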

4 mentions: @roadrunning01, @asam9891, @cynicalsecurity
Date: 2019/11/08 09:50

Referring Tweets

@asam9891 Making an Invisibility Cloak: Real World Adversarial Attacks on Object Detectors — the one that got attention for its great Fig. 1 caption. A paper evaluating how well adversarial patches against object detectors transfer to different detectors and datasets. Very practical research. t.co/yxKhmQhS7q
@roadrunning01 Making an Invisibility Cloak: Real World Adversarial Attacks on Object Detectors pdf: t.co/E6ucABT70z abs: t.co/Z39OV1A7nQ t.co/cGzouPaTHm
