variables: 736541
Data license: CC-BY
column | value
---|---
id | 736541
name | Top-5 accuracy
unit | %
description | see the full text below this table
createdAt | 2023-07-03 14:54:48
updatedAt | 2024-07-08 15:20:31
code |
coverage |
timespan |
datasetId | 6102
sourceId | 29574
shortUnit | %
display | { "unit": "%", "zeroDay": "2019-01-01", "shortUnit": "%", "yearIsDay": true, "numDecimalPlaces": 0 }
columnOrder | 0
originalMetadata |
grapherConfigAdmin |
shortName | papers_with_code_imagenet_top5
catalogPath | grapher/artificial_intelligence/2023-06-14/papers_with_code_benchmarks/papers_with_code_benchmarks#papers_with_code_imagenet_top5
dimensions |
schemaVersion | 1
processingLevel |
processingLog |
titlePublic |
titleVariant |
attributionShort |
attribution |
descriptionShort |
descriptionFromProducer |
descriptionKey | []
descriptionProcessing |
licenses |
license |
grapherConfigETL |
type | float
sort | []
dataChecksum | 87596d558e2e33107e675da91d259c0c
metadataChecksum | 6dda9259f7ebf5d6eeba088e2c7b7d34

Full text of the description field:

The top-5 accuracy measure is used to assess how frequently a model's top five predictions include the correct answer from a list of 1,000 options.

Here's an example to illustrate what this benchmark tests. When an image classification model is presented with an image of an animal, it assigns probabilities to each possible label. Based on these probabilities, the model generates its top five predictions out of a total of 1,000 animal labels. For instance, the model might output the following as its top five guesses:

* Cat
* Dog
* Elephant
* Lion
* Tiger

Suppose the correct label for the image is "dog". If "dog" appears among the model's top five predictions, the prediction is considered correct under the top-5 accuracy metric. On the other hand, if the correct label is "giraffe" and "giraffe" is not among the model's top five predictions, the prediction is considered incorrect.

To calculate top-5 accuracy, researchers evaluate the model on a large dataset with known labels and compute the percentage of examples in which the correct label appears within the model's top five predictions out of the 1,000 possible options. This measure provides a broader perspective on the model's performance by considering whether the correct answer is among its top guesses, even if it is not the model's absolute top prediction.
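As a minimal sketch of the calculation described above (not part of this record), the snippet below computes top-k accuracy with NumPy. The function name `top_k_accuracy` and the toy scores and labels are assumptions made for illustration, not values taken from this dataset.

```python
import numpy as np

def top_k_accuracy(probs: np.ndarray, labels: np.ndarray, k: int = 5) -> float:
    """Fraction of examples whose true label is among the k highest-scoring classes.

    probs  : (n_examples, n_classes) array of per-class scores or probabilities
    labels : (n_examples,) array of integer class indices (the correct answers)
    """
    # Indices of the k highest-scoring classes for each example
    # (the order within the top k does not matter for this metric).
    top_k = np.argpartition(probs, -k, axis=1)[:, -k:]
    # A prediction counts as a hit if the true label appears anywhere in the top k.
    hits = (top_k == labels[:, None]).any(axis=1)
    return float(hits.mean())

# Toy usage: 3 examples, 1,000 classes (as in ImageNet classification).
rng = np.random.default_rng(0)
scores = rng.random((3, 1000))
true_labels = np.array([7, 42, 999])
print(f"Top-5 accuracy: {top_k_accuracy(scores, true_labels) * 100:.1f}%")
```

Multiplying the returned fraction by 100 expresses the result as a percentage, matching the `%` unit recorded for this variable.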