variables: 736554
Data license: CC-BY
| Field | Value |
|---|---|
| id | 736554 |
| name | Top-1 accuracy - state of the art |
| unit | % |
| description | Top-1 accuracy measures how often a model's single highest-confidence prediction matches the correct answer from a given set of options. For example, an image-classification model shown a photo of an animal assigns a probability to each candidate label and outputs its most probable one, say "Cat". If the image is indeed a cat, the prediction is correct under the top-1 metric; if the image shows a dog, it is incorrect. To compute top-1 accuracy, researchers evaluate the model on a large dataset with known labels and report the percentage of examples for which the model's highest-confidence prediction matches the true label. The metric therefore assesses only the model's single best guess. |
| createdAt | 2023-07-03 14:54:56 |
| updatedAt | 2024-07-08 16:38:15 |
| code | |
| coverage | |
| timespan | |
| datasetId | 6103 |
| sourceId | 29583 |
| shortUnit | % |
| display | { "name": "Top-1 accuracy", "unit": "%", "zeroDay": "2019-01-01", "shortUnit": "%", "yearIsDay": true, "numDecimalPlaces": 1 } |
| columnOrder | 0 |
| originalMetadata | |
| grapherConfigAdmin | |
| shortName | papers_with_code_imagenet_top1_state_of_the_art |
| catalogPath | grapher/artificial_intelligence/2023-06-14/papers_with_code_benchmarks_state_of_the_art/papers_with_code_benchmarks_state_of_the_art#papers_with_code_imagenet_top1_state_of_the_art |
| dimensions | |
| schemaVersion | 1 |
| processingLevel | |
| processingLog | |
| titlePublic | |
| titleVariant | |
| attributionShort | |
| attribution | |
| descriptionShort | |
| descriptionFromProducer | |
| descriptionKey | [] |
| descriptionProcessing | |
| licenses | |
| license | |
| grapherConfigETL | |
| type | float |
| sort | [] |
| dataChecksum | c3edb83eb26f04725e38dd677ed265d8 |
| metadataChecksum | 407fbeaf907a93a51011e14827e7880e |
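The computation behind this variable's metric can be sketched in a few lines. This is a minimal illustration, not the pipeline used to produce the data; the function name `top1_accuracy` and the toy probabilities and labels are hypothetical.

```python
import numpy as np

def top1_accuracy(probs, labels):
    """Percentage-style fraction of examples whose highest-probability
    class index equals the true label (hypothetical helper)."""
    preds = np.argmax(probs, axis=1)          # the model's single best guess per example
    return float(np.mean(preds == np.asarray(labels)))

# Toy scores for 4 examples over 3 classes (e.g. cat, dog, bird)
probs = np.array([
    [0.7, 0.2, 0.1],   # top guess: class 0
    [0.1, 0.8, 0.1],   # top guess: class 1
    [0.3, 0.4, 0.3],   # top guess: class 1
    [0.5, 0.1, 0.4],   # top guess: class 0
])
labels = [0, 1, 2, 0]  # third example is misclassified

print(top1_accuracy(probs, labels))  # → 0.75
```

Only the argmax of each probability row matters; how confident the model was in its runner-up guesses has no effect on this metric.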