- H Yanaka, K Mineshima, D Bekki, K Inui, S Sekine, L Abzianidze, J Bos. "Can neural networks understand monotonicity reasoning?" arXiv preprint arXiv:1906.06448, 2019. (Cited by 89)
- H Yanaka, K Mineshima, D Bekki, K Inui, S Sekine, L Abzianidze, J Bos. "HELP: A dataset for identifying shortcomings of neural models in monotonicity reasoning." arXiv preprint arXiv:1904.12166, 2019. (Cited by 66)
- H Yanaka, K Mineshima, D Bekki, K Inui. "Do neural models learn systematicity of monotonicity inference in natural language?" arXiv preprint arXiv:2004.14839, 2020. (Cited by 55)
- H Yanaka, K Mineshima, P Martínez-Gómez, D Bekki. "Acquisition of phrase correspondences using natural deduction proofs." arXiv preprint arXiv:1804.07656, 2018. (Cited by 26)
- H Yanaka, K Mineshima. "Compositional evaluation on Japanese textual entailment and similarity." Transactions of the Association for Computational Linguistics 10, 1266-1284, 2022. (Cited by 24)
- H Yanaka, K Mineshima, K Inui. "Exploring transitivity in neural NLI models through veridicality." arXiv preprint arXiv:2101.10713, 2021. (Cited by 21)
- R Suzuki, H Yanaka, M Yoshikawa, K Mineshima, D Bekki. "Multimodal logical inference system for visual-textual entailment." arXiv preprint arXiv:1906.03952, 2019. (Cited by 19)
- M Mita, H Yanaka. "Do grammatical error correction models realize grammatical generalization?" arXiv preprint arXiv:2106.03031, 2021. (Cited by 16)
- H Yanaka, K Mineshima. "Assessing the generalization capacity of pre-trained language models through Japanese adversarial natural language inference." Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting …, 2021. (Cited by 14)
- H Yanaka, K Mineshima, K Inui. "SyGNS: A systematic generalization testbed based on natural language semantics." arXiv preprint arXiv:2106.01077, 2021. (Cited by 13)
- H Yanaka, K Mineshima, P Martínez-Gómez, D Bekki. "Determining semantic textual similarity using natural deduction proofs." arXiv preprint arXiv:1707.08713, 2017. (Cited by 8)
- T Kojima, I Okimura, Y Iwasawa, H Yanaka, Y Matsuo. "On the Multilingual Ability of Decoder-based Pre-trained Language Models: Finding and Controlling Language-Specific Neurons." arXiv preprint arXiv:2404.02431, 2024. (Cited by 7)
- T Sugimoto, H Yanaka. "Compositional semantics and inference system for temporal order based on Japanese CCG." arXiv preprint arXiv:2204.09245, 2022. (Cited by 6)
- T Kurosawa, H Yanaka. "Logical inference for counting on semi-structured tables." arXiv preprint arXiv:2204.07803, 2022. (Cited by 4)
- K Manome, M Yoshikawa, H Yanaka, P Martínez-Gómez, K Mineshima, … "Neural sentence generation from formal semantics." Proceedings of the 11th International Conference on Natural Language …, 2018. (Cited by 4)
- T Sugimoto, Y Onoe, H Yanaka. "Jamp: Controlled Japanese Temporal Inference Dataset for Evaluating Generalization Capacity of Language Models." arXiv preprint arXiv:2306.10727, 2023. (Cited by 3)
- D Bekki, H Yanaka. "Is Japanese CCGBank empirically correct? A case study of passive and causative constructions." arXiv preprint arXiv:2302.14708, 2023. (Cited by 3)
- H Yanaka, Y Ohsawa. "Clustering documents on case vectors represented by predicate-argument structures: applied for eliciting technological problems from patents." 2016 Federated Conference on Computer Science and Information Systems …, 2016. (Cited by 3)
- A Aizawa, E Aramaki, B Chen, F Cheng, H Deguchi, R Enomoto, K Fujii, … "LLM-jp: A cross-organizational project for the research and development of fully open Japanese LLMs." arXiv preprint arXiv:2407.03963, 2024. (Cited by 2)
- H Yanaka, N Han, R Kumon, J Lu, M Takeshita, R Sekizawa, T Kato, … "Analyzing Social Biases in Japanese Large Language Models." arXiv preprint arXiv:2406.02050, 2024. (Cited by 2)