LIP: Overview

Look into Person (LIP) is a new large-scale dataset focused on the semantic understanding of people. The dataset contains 50,000 images with elaborate pixel-wise annotations covering 19 semantic human-part labels, plus 2D human poses with 16 keypoints. The 50,000 annotated images are person instances cropped from the COCO dataset, each larger than 50 × 50 pixels. The images, collected from real-world scenarios, show people in challenging poses and views.
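The parsing annotations are per-pixel label maps in which 0 is background and 1–19 index the semantic human parts. A minimal sketch of decoding such a mask, assuming the commonly published LIP label ordering (the part names below are an assumption; check them against your copy of the dataset):

```python
import numpy as np

# Assumed LIP part names for labels 1..19 (0 = background); verify
# against the official label list before relying on the ordering.
LIP_PARTS = [
    "hat", "hair", "glove", "sunglasses", "upper-clothes", "dress",
    "coat", "socks", "pants", "jumpsuits", "scarf", "skirt", "face",
    "left-arm", "right-arm", "left-leg", "right-leg", "left-shoe",
    "right-shoe",
]

def parts_present(mask: np.ndarray) -> list:
    """Return the human-part names that appear in a label mask."""
    labels = np.unique(mask)
    return [LIP_PARTS[l - 1] for l in labels if 1 <= l <= len(LIP_PARTS)]

# Tiny synthetic mask standing in for a decoded annotation PNG.
mask = np.zeros((4, 4), dtype=np.uint8)
mask[0, :] = 2    # hair
mask[1:3, :] = 5  # upper-clothes
print(parts_present(mask))  # → ['hair', 'upper-clothes']
```

In the real dataset the mask would be read from an annotation PNG (e.g. with Pillow) rather than built by hand; the synthetic array is only there to keep the sketch self-contained.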

1 mentions: @shion_honda
Date: 2019/08/16 15:47

Referring Tweets

@shion_honda The "MPV" dataset built for MG-VTON should be downloadable from here, but for some reason the download failed for me. They report collecting 62,780 try-on images from 35,687 people and 13,524 items.

Related Entries

Dataset for Semantic Urban Scene Understanding
0 users, 0 mentions 2018/10/12 14:56
On Building an Instagram Street Art Dataset and Detection Model
0 users, 38 mentions 2019/01/29 20:59
GitHub - allenai/PeerRead: Data and code for Kang et al., NAACL 2018's paper titled "A Dataset of Pe...
0 users, 0 mentions 2018/05/01 07:07
COCO-Text: Dataset for Text Detection and Recognition | SE(3) Computer Vision Group at Cornell Tech
2 users, 1 mention 2019/04/01 14:17
GitHub - YumaKoizumi/ToyADMOS-dataset: Dataset and its sample codes for anomaly detection in sound.
1 user, 0 mentions 2019/08/13 02:16