brain of mat kelcey

( with a fun exploration of easy/hard positive/negative mining )

Say we want to train a neural net whose output is an embedding of its input. How would we train it? To train anything we need a loss function, and usually that function describes exactly what we want the output to be in the form of explicit labels. But for embeddings things are a little different: we might not care exactly what embedding values we get, only how they are related for different inputs.

One way of specifying this relationship for embeddings is an idea called triplet loss. Let's consider three particular instances: an anchor, a positive, and a negative. Triplet loss is a way of specifying that we don't care exactly where things get embedded, only that the anchor should be closer to the positive than to the negative. We can express this idea of "closer" in a loss function in a simple, clever way. Consider two distances: the distance from the anchor to the positive (dist_anchor_positive) and the distance from the anchor to the negative (dist_anchor_negative)...
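The excerpt is cut off before the loss itself, but the standard triplet loss formulation hinges on the difference of those two distances: penalize the triplet whenever the positive isn't closer than the negative by at least some margin. A minimal numpy sketch, assuming squared euclidean distance and a `margin` hyperparameter (both common choices, not taken from the post):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    # squared euclidean distances between the embeddings
    dist_anchor_positive = np.sum((anchor - positive) ** 2)
    dist_anchor_negative = np.sum((anchor - negative) ** 2)
    # hinge: loss is zero once the anchor is closer to the positive
    # than to the negative by at least `margin`
    return max(0.0, dist_anchor_positive - dist_anchor_negative + margin)
```

Note that a triplet where the negative is already much further away than the positive contributes zero loss (an "easy" triplet), which is exactly why mining "hard" positives/negatives, as the post explores, matters: only triplets that violate the margin produce a gradient.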

Date: 2019/06/13 17:16

Referring Tweets

@mat_kelcey "pybullet grasping & time contrastive network embeddings"