dataset size for training #387

Open
opened 2023-03-07 10:13:56 +01:00 by tyrink · 1 comment
tyrink commented 2023-03-07 10:13:56 +01:00 (Migrated from github.com)

Hi, the results of your method seem good, but I wonder whether you chose a subset of the original VGGFace2 for training, and what is the size of your selected training set?

sjokic commented 2023-03-13 17:19:43 +01:00 (Migrated from github.com)

According to their paper: "To improve the quality of our training set, we remove images with size smaller than 250 × 250. We align and crop the images to a standard position with size 224 × 224". So yes, they chose a subset of VGGFace2 and use 224 × 224 for all images in the training data set. You can find a download link for this data set in the README of this repo.
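The quoted preprocessing can be sketched roughly as follows. This is not the repo's actual code, just a minimal illustration of the two rules mentioned: discard images smaller than 250 × 250, then take a centered 224 × 224 crop. (The paper's "standard position" alignment would normally use facial landmarks, which is omitted here.)

```python
# Hypothetical sketch of the subset-selection rule described in the paper.
MIN_SIZE = 250   # images with either side below this are removed
CROP_SIZE = 224  # final training resolution

def keep_image(width, height, min_size=MIN_SIZE):
    """Return True if the image is large enough to keep in the training set."""
    return width >= min_size and height >= min_size

def center_crop_box(width, height, crop=CROP_SIZE):
    """Return a (left, top, right, bottom) box for a centered crop."""
    left = (width - crop) // 2
    top = (height - crop) // 2
    return (left, top, left + crop, top + crop)

if __name__ == "__main__":
    print(keep_image(300, 260))       # True: both sides >= 250
    print(keep_image(240, 300))       # False: one side < 250, image dropped
    print(center_crop_box(300, 260))  # (38, 18, 262, 242)
```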
