Vladislav Lialin
Title · Cited by · Year
Scaling down to scale up: A guide to parameter-efficient fine-tuning
V Lialin, V Deshpande, A Rumshisky
arXiv preprint arXiv:2303.15647, 2023
75 · 2023
ReLoRA: High-Rank Training Through Low-Rank Updates
V Lialin, S Muckatira, N Shivagunde, A Rumshisky
Workshop on Advancing Neural Network Training: Computational Efficiency …, 2023
19* · 2023
Learning to ask like a physician
E Lehman, V Lialin, KY Legaspi, AJR Sy, PTS Pile, NRI Alberto, ...
arXiv preprint arXiv:2206.02696, 2022
16 · 2022
Named entity recognition in noisy domains
V Malykh, V Lyalin
2018 international conference on artificial intelligence applications and …, 2018
12 · 2018
Update frequently, update fast: Retraining semantic parsing systems in a fraction of time
V Lialin, R Goel, A Simanovsky, A Rumshisky, R Shah
arXiv preprint arXiv:2010.07865, 2020
9* · 2020
Life after BERT: What do Other Muppets Understand about Language?
V Lialin, K Zhao, N Shivagunde, A Rumshisky
arXiv preprint arXiv:2205.10696, 2022
8 · 2022
Honey, I shrunk the language: Language model behavior at reduced scale
V Deshpande, D Pechi, S Thatte, V Lialin, A Rumshisky
arXiv preprint arXiv:2305.17266, 2023
5 · 2023
Scalable and accurate self-supervised multimodal representation learning without aligned video and text data
V Lialin, S Rawls, D Chan, S Ghosh, A Rumshisky, W Hamza
Proceedings of the IEEE/CVF Winter Conference on Applications of Computer …, 2023
5 · 2023
Let's Reinforce Step by Step
S Pan, V Lialin, S Muckatira, A Rumshisky
arXiv preprint arXiv:2311.05821, 2023
1 · 2023
Improving Classification Robustness for Noisy Texts with Robust Word Vectors
V Malykh, V Lyalin
Journal of Mathematical Sciences 273 (4), 605-613, 2023
1 · 2023
Emergent Abilities in Reduced-Scale Generative Language Models
S Muckatira, V Deshpande, V Lialin, A Rumshisky
arXiv preprint arXiv:2404.02204, 2024
2024
Deconstructing In-Context Learning: Understanding Prompts via Corruption
N Shivagunde, V Lialin, S Muckatira, A Rumshisky
arXiv preprint arXiv:2404.02054, 2024
2024
Recent Advances, Applications, and Open Challenges in Machine Learning for Health: Reflections from Research Roundtables at ML4H 2023 Symposium
H Jeong, S Jabbour, Y Yang, R Thapta, H Mozannar, WJ Han, ...
arXiv preprint arXiv:2403.01628, 2024
2024
Larger Probes Tell a Different Story: Extending Psycholinguistic Datasets Via In-Context Learning
N Shivagunde, V Lialin, A Rumshisky
arXiv preprint arXiv:2303.16445, 2023
2023
Injecting Hierarchy with U-Net Transformers
D Donahue, V Lialin, A Rumshisky
arXiv preprint arXiv:1910.10488, 2019
2019
NarrativeTime: Dense Temporal Annotation on a Timeline
A Rogers, M Karpinska, A Gupta, V Lialin, G Smelkov, A Rumshisky
arXiv preprint arXiv:1908.11443, 2019
2019
On the Classification of Noisy Texts
VA Malykh, VA Lyalin
Proceedings of the Institute for Systems Analysis of the Russian Academy of Sciences 68 (S1), 174-182, 2018
2018
Text is an Image: Augmentation via Embedding Mixing
K Zhao, V Lialin, A Rumshisky