Hitomi Yanaka
Verified email at is.s.u-tokyo.ac.jp - Homepage
Title · Cited by · Year
Can neural networks understand monotonicity reasoning?
H Yanaka, K Mineshima, D Bekki, K Inui, S Sekine, L Abzianidze, J Bos
arXiv preprint arXiv:1906.06448, 2019
89 · 2019
HELP: A dataset for identifying shortcomings of neural models in monotonicity reasoning
H Yanaka, K Mineshima, D Bekki, K Inui, S Sekine, L Abzianidze, J Bos
arXiv preprint arXiv:1904.12166, 2019
66 · 2019
Do neural models learn systematicity of monotonicity inference in natural language?
H Yanaka, K Mineshima, D Bekki, K Inui
arXiv preprint arXiv:2004.14839, 2020
55 · 2020
Acquisition of phrase correspondences using natural deduction proofs
H Yanaka, K Mineshima, P Martínez-Gómez, D Bekki
arXiv preprint arXiv:1804.07656, 2018
26 · 2018
Compositional evaluation on Japanese textual entailment and similarity
H Yanaka, K Mineshima
Transactions of the Association for Computational Linguistics 10, 1266-1284, 2022
24 · 2022
Exploring transitivity in neural NLI models through veridicality
H Yanaka, K Mineshima, K Inui
arXiv preprint arXiv:2101.10713, 2021
21 · 2021
Multimodal logical inference system for visual-textual entailment
R Suzuki, H Yanaka, M Yoshikawa, K Mineshima, D Bekki
arXiv preprint arXiv:1906.03952, 2019
19 · 2019
Do grammatical error correction models realize grammatical generalization?
M Mita, H Yanaka
arXiv preprint arXiv:2106.03031, 2021
16 · 2021
Assessing the generalization capacity of pre-trained language models through Japanese adversarial natural language inference
H Yanaka, K Mineshima
Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting …, 2021
14 · 2021
SyGNS: A systematic generalization testbed based on natural language semantics
H Yanaka, K Mineshima, K Inui
arXiv preprint arXiv:2106.01077, 2021
13 · 2021
Determining semantic textual similarity using natural deduction proofs
H Yanaka, K Mineshima, P Martínez-Gómez, D Bekki
arXiv preprint arXiv:1707.08713, 2017
8 · 2017
On the Multilingual Ability of Decoder-based Pre-trained Language Models: Finding and Controlling Language-Specific Neurons
T Kojima, I Okimura, Y Iwasawa, H Yanaka, Y Matsuo
arXiv preprint arXiv:2404.02431, 2024
7 · 2024
Compositional semantics and inference system for temporal order based on Japanese CCG
T Sugimoto, H Yanaka
arXiv preprint arXiv:2204.09245, 2022
6 · 2022
Logical inference for counting on semi-structured tables
T Kurosawa, H Yanaka
arXiv preprint arXiv:2204.07803, 2022
4 · 2022
Neural sentence generation from formal semantics
K Manome, M Yoshikawa, H Yanaka, P Martínez-Gómez, K Mineshima, ...
Proceedings of the 11th International Conference on Natural Language …, 2018
4 · 2018
Jamp: Controlled Japanese Temporal Inference Dataset for Evaluating Generalization Capacity of Language Models
T Sugimoto, Y Onoe, H Yanaka
arXiv preprint arXiv:2306.10727, 2023
3 · 2023
Is Japanese CCGBank empirically correct? A case study of passive and causative constructions
D Bekki, H Yanaka
arXiv preprint arXiv:2302.14708, 2023
3 · 2023
Clustering documents on case vectors represented by predicate-argument structures – applied for eliciting technological problems from patents
H Yanaka, Y Ohsawa
2016 Federated Conference on Computer Science and Information Systems …, 2016
3 · 2016
LLM-jp: A cross-organizational project for the research and development of fully open Japanese LLMs
A Aizawa, E Aramaki, B Chen, F Cheng, H Deguchi, R Enomoto, K Fujii, ...
arXiv preprint arXiv:2407.03963, 2024
2 · 2024
Analyzing Social Biases in Japanese Large Language Models
H Yanaka, N Han, R Kumon, J Lu, M Takeshita, R Sekizawa, T Kato, ...
arXiv preprint arXiv:2406.02050, 2024
2 · 2024