owid
Data license: CC-BY
id | name | description | createdAt | updatedAt | datasetId | additionalInfo | link | dataPublishedBy |
---|---|---|---|---|---|---|---|---|
29574 | Papers With Code (2023) | { "link": "https://paperswithcode.com/", "retrievedDate": "2023-06-14", "additionalInfo": "\nThe goal of Papers With Code website is to compile a comprehensive collection of ML papers, code implementations, datasets, methods, and evaluation tables, all made freely available.\n\nThe comparisons to human performance are very approximate and based on small samples of people \u2014 they are only meant to give a rough comparison. You can read more details in the papers that describe the benchmarks:\n\n-Hendrycks et al (2021) Measuring Massive Multitask Language Understanding (MMLU) (page 3): https://arxiv.org/pdf/2009.03300.pdf\n\n-Hendrycks et al (2021) Measuring Mathematical Problem Solving With the MATH Dataset (page 5): https://arxiv.org/pdf/2103.03874v2.pdf\n", "dataPublishedBy": "Papers With Code" } | 2023-12-14 12:06:37 | 2024-07-08 15:20:31 | 6102 | The goal of Papers With Code website is to compile a comprehensive collection of ML papers, code implementations, datasets, methods, and evaluation tables, all made freely available. The comparisons to human performance are very approximate and based on small samples of people — they are only meant to give a rough comparison. You can read more details in the papers that describe the benchmarks: -Hendrycks et al (2021) Measuring Massive Multitask Language Understanding (MMLU) (page 3): https://arxiv.org/pdf/2009.03300.pdf -Hendrycks et al (2021) Measuring Mathematical Problem Solving With the MATH Dataset (page 5): https://arxiv.org/pdf/2103.03874v2.pdf | https://paperswithcode.com/ | Papers With Code |
28105 | Papers With Code (2023) | { "link": "https://paperswithcode.com/", "retrievedDate": "2023-06-14", "additionalInfo": "The goal of Papers With Code website is to compile a comprehensive collection of ML papers, code implementations, datasets, methods, and evaluation tables, all made freely available.\n\nThe comparisons to human performance are very approximate and based on small samples of people \u2014 they are only meant to give a rough comparison. You can read more details in the papers that describe the benchmarks:\n\n-Hendrycks et al (2021) Measuring Massive Multitask Language Understanding (MMLU) (page 3): https://arxiv.org/pdf/2009.03300.pdf\n\n-Hendrycks et al (2021) Measuring Mathematical Problem Solving With the MATH Dataset (page 5): https://arxiv.org/pdf/2103.03874v2.pdf\n", "dataPublishedBy": "Papers With Code" } | 2023-07-25 15:52:05 | 2024-07-08 15:20:30 | 6102 | The goal of Papers With Code website is to compile a comprehensive collection of ML papers, code implementations, datasets, methods, and evaluation tables, all made freely available. The comparisons to human performance are very approximate and based on small samples of people — they are only meant to give a rough comparison. You can read more details in the papers that describe the benchmarks: -Hendrycks et al (2021) Measuring Massive Multitask Language Understanding (MMLU) (page 3): https://arxiv.org/pdf/2009.03300.pdf -Hendrycks et al (2021) Measuring Mathematical Problem Solving With the MATH Dataset (page 5): https://arxiv.org/pdf/2103.03874v2.pdf | https://paperswithcode.com/ | Papers With Code |
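
Note that the `description` column stores a JSON blob whose fields (`link`, `retrievedDate`, `additionalInfo`, `dataPublishedBy`) largely mirror the flat columns of the same names. A minimal sketch of unpacking that column is shown below; it assumes the table has been exported to a CSV file (the name `sources.csv` is hypothetical) and that pandas is available.

```python
import json

import pandas as pd  # assumed available; any CSV reader would work

# Hypothetical export of the sources table shown above.
sources = pd.read_csv("sources.csv")

# Parse the JSON blob held in the `description` column.
parsed = sources["description"].apply(json.loads)

# Pull out fields that are only present inside the JSON, e.g. retrievedDate.
sources["retrievedDate"] = parsed.apply(lambda d: d.get("retrievedDate"))

print(sources[["id", "name", "retrievedDate", "link", "dataPublishedBy"]])
```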