<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="http://dmacjam.github.io/feed.xml" rel="self" type="application/atom+xml" /><link href="http://dmacjam.github.io/" rel="alternate" type="text/html" /><updated>2026-02-12T01:09:00+00:00</updated><id>http://dmacjam.github.io/feed.xml</id><title type="html">jakub@macina.sk</title><subtitle>Jakub Macina is a Machine Learning Researcher.</subtitle><author><name>Jakub Macina</name><email>jakub@macina.sk</email></author><entry><title type="html">[Award] Forbes 30 under 30: Science &amp;amp; Education</title><link href="http://dmacjam.github.io/awards/forbes-30u30/" rel="alternate" type="text/html" title="[Award] Forbes 30 under 30: Science &amp;amp; Education" /><published>2023-05-02T00:00:00+00:00</published><updated>2023-05-02T00:00:00+00:00</updated><id>http://dmacjam.github.io/awards/forbes-30u30</id><content type="html" xml:base="http://dmacjam.github.io/awards/forbes-30u30/"><![CDATA[<p>Selected for the Forbes 30 Under 30 list in the category Science &amp; Education.
<!--more--></p>

<iframe src="https://www.linkedin.com/embed/feed/update/urn:li:share:7062457062709940224" height="539" width="504" frameborder="0" allowfullscreen="" title="Embedded post"></iframe>

<iframe width="560" height="315" src="https://www.youtube.com/embed/D4z8dle5Kig?start=71" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen=""></iframe>

<ul>
  <li><a href="https://www.forbes.sk/mlady-talent-z-ilavy-skuma-v-zurichu-umelu-inteligenciu-moze-mat-kazdy-ziak-osobneho-kouca/">Interview</a></li>
  <li><a href="https://www.forbes.sk/prival-mladej-energie-pozrite-si-ako-to-vyzeralo-na-10-rocniku-festivalu-forbes-30-pod-30/">Gallery</a></li>
  <li><a href="">Printed magazine - Edition May 2023, page 58</a></li>
</ul>]]></content><author><name>Jakub Macina</name><email>jakub@macina.sk</email></author><category term="Awards" /><category term="award" /><summary type="html"><![CDATA[Selected for the Forbes 30 Under 30 list in the category Science &amp; Education.]]></summary></entry><entry><title type="html">[Paper] EACL2023 Opportunities and Challenges in Neural Dialog Tutoring</title><link href="http://dmacjam.github.io/research/dialog-tutoring-paper/" rel="alternate" type="text/html" title="[Paper] EACL2023 Opportunities and Challenges in Neural Dialog Tutoring" /><published>2023-04-01T00:00:00+00:00</published><updated>2023-04-01T00:00:00+00:00</updated><id>http://dmacjam.github.io/research/dialog-tutoring-paper</id><content type="html" xml:base="http://dmacjam.github.io/research/dialog-tutoring-paper/"><![CDATA[<p>Jakub Macina*, Nico Daheim*, Lingzhi Wang, Tanmay Sinha, Manu Kapur, Iryna Gurevych, Mrinmaya Sachan.
In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 2357–2372, Dubrovnik, Croatia. Association for Computational Linguistics.
<!--more--></p>

<p>Designing dialog tutors has been challenging as it involves modeling the diverse and complex pedagogical strategies employed by human tutors. Although there have been significant recent advances in neural conversational systems using large language models and growth in available dialog corpora, dialog tutoring has largely remained unaffected by these advances. In this paper, we rigorously analyze various generative language models on two dialog tutoring datasets for language learning using automatic and human evaluations to understand the new opportunities brought by these advances as well as the challenges we must overcome to build models that would be usable in real educational settings. We find that although current approaches can model tutoring in constrained learning scenarios when the number of concepts to be taught and possible teacher strategies are small, they perform poorly in less constrained scenarios. Our human quality evaluation shows that both models and ground-truth annotations exhibit low performance in terms of equitable tutoring, which measures learning opportunities for students and how engaging the dialog is. To understand the behavior of our models in a real tutoring setting, we conduct a user study with expert annotators and find model reasoning errors in 45% of conversations. Finally, we connect our findings to outline future work.</p>

<p>Oral presentation at EACL 2023 in Dubrovnik as a main conference paper:</p>
<ul>
  <li><a href="https://s3.amazonaws.com/pf-user-files-01/u-59356/uploads/2023-04-11/nj23uqc/eacl-23-dialog-tutoring-q2.mp4">Video</a></li>
  <li><a href="https://aclanthology.org/2023.eacl-main.173/">EACL 2023</a></li>
  <li><a href="https://arxiv.org/pdf/2301.09919.pdf">arXiv link</a></li>
  <li><a href="https://github.com/eth-nlped/dialog-tutoring">GitHub</a></li>
</ul>

<blockquote class="twitter-tweet"><p lang="en" dir="ltr">Happy to share that our paper &quot;Opportunities and Challenges in Neural Dialog Tutoring&quot; will be presented at <a href="https://twitter.com/hashtag/EACL2023?src=hash&amp;ref_src=twsrc%5Etfw">#EACL2023</a>!<br />Work with: <a href="https://twitter.com/dmacjam?ref_src=twsrc%5Etfw">@dmacjam</a>, Lingzhi Wang, <a href="https://twitter.com/TanmaySinha655?ref_src=twsrc%5Etfw">@TanmaySinha655</a>, Manu Kapur, <a href="https://twitter.com/mrinmayasachan?ref_src=twsrc%5Etfw">@mrinmayasachan</a> <a href="https://twitter.com/IGurevych?ref_src=twsrc%5Etfw">@IGurevych</a><br />Find the paper on arXiv: <a href="https://t.co/fDSZbexBU4">https://t.co/fDSZbexBU4</a> <a href="https://t.co/GtYhJFKiGT">pic.twitter.com/GtYhJFKiGT</a></p>&mdash; Nico Daheim (@ndaheim_) <a href="https://twitter.com/ndaheim_/status/1618233754543362049?ref_src=twsrc%5Etfw">January 25, 2023</a></blockquote>
<script async="" src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>]]></content><author><name>Jakub Macina</name><email>jakub@macina.sk</email></author><category term="Research" /><category term="research" /><summary type="html"><![CDATA[Jakub Macina*, Nico Daheim*, Lingzhi Wang, Tanmay Sinha, Manu Kapur, Iryna Gurevych, Mrinmaya Sachan. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 2357–2372, Dubrovnik, Croatia. Association for Computational Linguistics.]]></summary></entry><entry><title type="html">[Paper] 2nd Math-AI Workshop at NeurIPS22: Automatic Generation of Socratic Questions for Learning to Solve Math Word Problems</title><link href="http://dmacjam.github.io/research/neurips22-mathai/" rel="alternate" type="text/html" title="[Paper] 2nd Math-AI Workshop at NeurIPS22: Automatic Generation of Socratic Questions for Learning to Solve Math Word Problems" /><published>2022-12-01T00:00:00+00:00</published><updated>2022-12-01T00:00:00+00:00</updated><id>http://dmacjam.github.io/research/neurips22-mathai</id><content type="html" xml:base="http://dmacjam.github.io/research/neurips22-mathai/"><![CDATA[<p>Jakub Macina*, Kumar Shridhar*, Mennatallah El-Assady, Tanmay Sinha, Manu Kapur, and Mrinmaya Sachan.
<!--more--></p>

<p>Presented at the 2nd MATH-AI Workshop at NeurIPS 2022 in New Orleans:</p>
<ul>
  <li><a href="https://neurips.cc/virtual/2022/workshop/50015">Video</a></li>
  <li><a href="https://mathai2022.github.io/">MATH-AI Workshop at NeurIPS22</a></li>
</ul>

<blockquote class="twitter-tweet"><p lang="en" dir="ltr">How good are NLP models at solving math word problems? To improve NLP models in reasoning, in our interdisciplinary research at the intersection of <a href="https://twitter.com/hashtag/NLP?src=hash&amp;ref_src=twsrc%5Etfw">#NLP</a> and <a href="https://twitter.com/hashtag/Education?src=hash&amp;ref_src=twsrc%5Etfw">#Education</a> we took inspiration from scaffolding theory which support human learners in discovering the answer on their own. <a href="https://t.co/ahsbJ9n9u5">pic.twitter.com/ahsbJ9n9u5</a></p>&mdash; Jakub Macina (@dmacjam) <a href="https://twitter.com/dmacjam/status/1599035799260958722?ref_src=twsrc%5Etfw">December 3, 2022</a></blockquote>
<script async="" src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>]]></content><author><name>Jakub Macina</name><email>jakub@macina.sk</email></author><category term="Research" /><category term="research" /><summary type="html"><![CDATA[Jakub Macina*, Kumar Shridhar*, Mennatallah El-Assady, Tanmay Sinha, Manu Kapur, and Mrinmaya Sachan.]]></summary></entry><entry><title type="html">[Paper] EMNLP22 Automatic Generation of Socratic Subquestions for Teaching Math Word Problems</title><link href="http://dmacjam.github.io/research/socratic-questions-paper/" rel="alternate" type="text/html" title="[Paper] EMNLP22 Automatic Generation of Socratic Subquestions for Teaching Math Word Problems" /><published>2022-10-01T00:00:00+00:00</published><updated>2022-10-01T00:00:00+00:00</updated><id>http://dmacjam.github.io/research/socratic-questions-paper</id><content type="html" xml:base="http://dmacjam.github.io/research/socratic-questions-paper/"><![CDATA[<p>Jakub Macina*, Kumar Shridhar*, Mennatallah El-Assady, Tanmay Sinha, Manu Kapur, and Mrinmaya Sachan.
In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 4136–4149, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
<!--more--></p>

<p>We explore the ability of large language models (LMs) to generate sequential questions for guiding math word problem-solving. We propose several guided question generation schemes based on input conditioning and reinforcement learning (RL) and find that, on both automatic and human quality evaluations, LMs constrained with desirable question properties generate superior questions and improve the overall performance of a math word problem solver.</p>
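As a toy illustration of what "constraining generation with desirable question properties" can mean, the sketch below reranks candidate subquestions with simple hand-written heuristics (well-formedness, grounding in the problem's quantities, reasonable length). These heuristics are hypothetical stand-ins invented for this example; they are not the conditioning signals or RL rewards used in the paper.

```python
def question_score(question: str, problem_numbers: list[str]) -> float:
    """Score a candidate subquestion by toy desirable properties
    (hypothetical stand-ins for learned question-quality rewards)."""
    score = 0.0
    if question.endswith("?"):          # well-formed as a question
        score += 1.0
    if any(n in question for n in problem_numbers):  # grounded in the problem
        score += 1.0
    if 3 <= len(question.split()) <= 20:             # reasonable length
        score += 0.5
    return score

def rerank(candidates: list[str], problem_numbers: list[str]) -> str:
    """Return the candidate subquestion with the highest property score."""
    return max(candidates, key=lambda q: question_score(q, problem_numbers))
```

In this toy setup, a complete, grounded question outranks a truncated one; the paper instead constrains the LM itself rather than post-hoc reranking.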

<p>Presented at EMNLP 2022 in Abu Dhabi as a main conference paper:</p>
<ul>
  <li><a href="https://s3.amazonaws.com/pf-user-files-01/u-59356/uploads/2022-11-09/3l53tdu/emnlp22-video.mp4">Video</a></li>
  <li><a href="https://aclanthology.org/2022.emnlp-main.277/">EMNLP 2022</a></li>
  <li><a href="https://arxiv.org/abs/2211.12835">arXiv link</a></li>
  <li><a href="https://github.com/eth-nlped/scaffolding-generation">GitHub</a></li>
</ul>

<blockquote class="twitter-tweet"><p lang="en" dir="ltr">How good are NLP models at solving math word problems? To improve NLP models in reasoning, in our interdisciplinary research at the intersection of <a href="https://twitter.com/hashtag/NLP?src=hash&amp;ref_src=twsrc%5Etfw">#NLP</a> and <a href="https://twitter.com/hashtag/Education?src=hash&amp;ref_src=twsrc%5Etfw">#Education</a> we took inspiration from scaffolding theory which support human learners in discovering the answer on their own. <a href="https://t.co/ahsbJ9n9u5">pic.twitter.com/ahsbJ9n9u5</a></p>&mdash; Jakub Macina (@dmacjam) <a href="https://twitter.com/dmacjam/status/1599035799260958722?ref_src=twsrc%5Etfw">December 3, 2022</a></blockquote>
<script async="" src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>]]></content><author><name>Jakub Macina</name><email>jakub@macina.sk</email></author><category term="Research" /><category term="research" /><summary type="html"><![CDATA[Jakub Macina*, Kumar Shridhar*, Mennatallah El-Assady, Tanmay Sinha, Manu Kapur, and Mrinmaya Sachan. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 4136–4149, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.]]></summary></entry><entry><title type="html">[New position] Doctoral Fellow at ETH AI Center</title><link href="http://dmacjam.github.io/research/eth-ai-center/" rel="alternate" type="text/html" title="[New position] Doctoral Fellow at ETH AI Center" /><published>2021-09-01T00:00:00+00:00</published><updated>2021-09-01T00:00:00+00:00</updated><id>http://dmacjam.github.io/research/eth-ai-center</id><content type="html" xml:base="http://dmacjam.github.io/research/eth-ai-center/"><![CDATA[<p>I’m starting my Doctoral Fellowship at the <a href="https://ai.ethz.ch/">ETH AI Center</a> working on the intersection of Natural Language Processing (NLP) and Learning Sciences. As ETH’s central hub for artificial intelligence, the ETH AI Center fosters research excellence, industry innovation, and AI entrepreneurship to promote trustworthy, accessible, and inclusive AI systems.
<!--more--></p>]]></content><author><name>Jakub Macina</name><email>jakub@macina.sk</email></author><category term="Research" /><category term="research" /><summary type="html"><![CDATA[I’m starting my Doctoral Fellowship at the ETH AI Center, working at the intersection of Natural Language Processing (NLP) and Learning Sciences. As ETH’s central hub for artificial intelligence, the ETH AI Center fosters research excellence, industry innovation, and AI entrepreneurship to promote trustworthy, accessible, and inclusive AI systems.]]></summary></entry><entry><title type="html">[Attending] NeurIPS 2020</title><link href="http://dmacjam.github.io/research/neurips20/" rel="alternate" type="text/html" title="[Attending] NeurIPS 2020" /><published>2020-12-06T00:00:00+00:00</published><updated>2020-12-06T00:00:00+00:00</updated><id>http://dmacjam.github.io/research/neurips20</id><content type="html" xml:base="http://dmacjam.github.io/research/neurips20/"><![CDATA[<p>I attended the <a href="https://nips.cc/Conferences/2020">NeurIPS 2020</a> research conference, which was held virtually.
<!--more--></p>]]></content><author><name>Jakub Macina</name><email>jakub@macina.sk</email></author><category term="Research" /><category term="research" /><summary type="html"><![CDATA[I attended the NeurIPS 2020 research conference, which was held virtually.]]></summary></entry><entry><title type="html">[Attending] RecSys 2020</title><link href="http://dmacjam.github.io/research/recsys20/" rel="alternate" type="text/html" title="[Attending] RecSys 2020" /><published>2020-09-22T00:00:00+00:00</published><updated>2020-09-22T00:00:00+00:00</updated><id>http://dmacjam.github.io/research/recsys20</id><content type="html" xml:base="http://dmacjam.github.io/research/recsys20/"><![CDATA[<p>I attended the <a href="https://recsys.acm.org/recsys20/">RecSys 2020</a> research conference, which was held virtually.
<!--more--></p>

<iframe src="https://www.linkedin.com/embed/feed/update/urn:li:share:6716256921462038528" height="644" width="504" frameborder="0" allowfullscreen="" title="Embedded post"></iframe>]]></content><author><name>Jakub Macina</name><email>jakub@macina.sk</email></author><category term="Research" /><category term="research" /><summary type="html"><![CDATA[I attended the RecSys 2020 research conference, which was held virtually.]]></summary></entry><entry><title type="html">[Talk] Vienna Deep Learning Meetup</title><link href="http://dmacjam.github.io/talk/deep-learning-vienna/" rel="alternate" type="text/html" title="[Talk] Vienna Deep Learning Meetup" /><published>2019-09-24T00:00:00+00:00</published><updated>2019-09-24T00:00:00+00:00</updated><id>http://dmacjam.github.io/talk/deep-learning-vienna</id><content type="html" xml:base="http://dmacjam.github.io/talk/deep-learning-vienna/"><![CDATA[<p>Talk: Deep Learning for Recommender Systems by Jakub Mačina, Machine Learning Engineer, Exponea
<!--more--></p>

<p>Recommender systems are driving business value through personalisation for customers of Amazon, Netflix, or Spotify. This talk will provide an overview of traditional and deep learning recommender system approaches and highlight the challenges encountered by industry practitioners, such as extreme data sparsity. A real-world case study will show how to capture users’ varying tastes and products in a dense (latent) embedding representation in order to design a scalable recommender system architecture.</p>
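The dense (latent) embedding idea above can be sketched with a minimal matrix factorization: users and items are mapped to low-dimensional vectors whose dot product approximates observed interactions, trained here by plain SGD. This is a toy, stdlib-only illustration of the general technique, not the production architecture discussed in the talk.

```python
import random

def train_embeddings(interactions, n_users, n_items,
                     dim=8, lr=0.05, reg=0.01, epochs=300, seed=0):
    """Learn dense user/item embeddings from (user, item, rating) triples
    by SGD on regularized squared error -- a minimal matrix-factorization sketch."""
    rng = random.Random(seed)
    U = [[rng.gauss(0, 0.1) for _ in range(dim)] for _ in range(n_users)]
    V = [[rng.gauss(0, 0.1) for _ in range(dim)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in interactions:
            pred = sum(U[u][k] * V[i][k] for k in range(dim))
            err = r - pred
            for k in range(dim):
                uk, vk = U[u][k], V[i][k]
                U[u][k] += lr * (err * vk - reg * uk)  # gradient step on user vector
                V[i][k] += lr * (err * uk - reg * vk)  # gradient step on item vector
    return U, V

def score(U, V, u, i):
    """Predicted affinity of user u for item i: dot product of their embeddings."""
    return sum(U[u][k] * V[i][k] for k in range(len(U[u])))
```

At scale the same dot-product structure lets items be recommended via approximate nearest-neighbor search over the learned vectors, which is what makes the architecture scalable.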

<p>Find out more about the event here:</p>

<ul>
  <li><a href="https://www.meetup.com/Vienna-Deep-Learning-Meetup/events/264243783/">Meetup description</a></li>
  <li><a href="https://github.com/vdlm/meetups">GitHub</a></li>
</ul>

<blockquote class="twitter-tweet"><p lang="en" dir="ltr">The Talk &quot;Deep Learning for Recommender Systems&quot; by Jakub Mačina <a href="https://twitter.com/dmacjam?ref_src=twsrc%5Etfw">@dmacjam</a> at the 29th <a href="https://twitter.com/hashtag/Vienna?src=hash&amp;ref_src=twsrc%5Etfw">#Vienna</a> <a href="https://twitter.com/hashtag/DeepLearning?src=hash&amp;ref_src=twsrc%5Etfw">#DeepLearning</a> <a href="https://twitter.com/hashtag/Meetup?src=hash&amp;ref_src=twsrc%5Etfw">#Meetup</a> is available on Youtube: <a href="https://t.co/fqhDaoOmcW">https://t.co/fqhDaoOmcW</a> <a href="https://twitter.com/hashtag/VDLM?src=hash&amp;ref_src=twsrc%5Etfw">#VDLM</a> <a href="https://twitter.com/hashtag/ai?src=hash&amp;ref_src=twsrc%5Etfw">#ai</a> <a href="https://twitter.com/hashtag/artificialintelligence?src=hash&amp;ref_src=twsrc%5Etfw">#artificialintelligence</a> <a href="https://twitter.com/hashtag/recommendersystems?src=hash&amp;ref_src=twsrc%5Etfw">#recommendersystems</a></p>&mdash; Tom Lidy (@LidyTom) <a href="https://twitter.com/LidyTom/status/1178943472201588737?ref_src=twsrc%5Etfw">October 1, 2019</a></blockquote>
<script async="" src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

<iframe class="center" width="560" height="315" src="https://www.youtube.com/embed/mTG-AZnhV10" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>]]></content><author><name>Jakub Macina</name><email>jakub@macina.sk</email></author><category term="Talk" /><category term="talk" /><summary type="html"><![CDATA[Talk: Deep Learning for Recommender Systems by Jakub Mačina, Machine Learning Engineer, Exponea]]></summary></entry><entry><title type="html">[Attending] RecSys 2019</title><link href="http://dmacjam.github.io/research/recsys19/" rel="alternate" type="text/html" title="[Attending] RecSys 2019" /><published>2019-09-16T00:00:00+00:00</published><updated>2019-09-16T00:00:00+00:00</updated><id>http://dmacjam.github.io/research/recsys19</id><content type="html" xml:base="http://dmacjam.github.io/research/recsys19/"><![CDATA[<p>I attended the <a href="https://recsys.acm.org/recsys19/">RecSys 2019</a> research conference in Copenhagen, Denmark.
<!--more--></p>

<iframe src="https://www.linkedin.com/embed/feed/update/urn:li:share:6580733109606715392" height="758" width="504" frameborder="0" allowfullscreen="" title="Embedded post"></iframe>]]></content><author><name>Jakub Macina</name><email>jakub@macina.sk</email></author><category term="Research" /><category term="research" /><summary type="html"><![CDATA[I attended the RecSys 2019 research conference in Copenhagen, Denmark.]]></summary></entry><entry><title type="html">[Attending] RecSys 2018</title><link href="http://dmacjam.github.io/research/recsys18/" rel="alternate" type="text/html" title="[Attending] RecSys 2018" /><published>2018-10-02T00:00:00+00:00</published><updated>2018-10-02T00:00:00+00:00</updated><id>http://dmacjam.github.io/research/recsys18</id><content type="html" xml:base="http://dmacjam.github.io/research/recsys18/"><![CDATA[<p>I attended the <a href="https://recsys.acm.org/recsys18/">RecSys 2018</a> research conference in Vancouver, Canada.
<!--more--></p>

<blockquote class="twitter-tweet"><p lang="en" dir="ltr">I especially like <a href="https://twitter.com/hashtag/recsys?src=hash&amp;ref_src=twsrc%5Etfw">#recsys</a> conference because of a great and welcoming community, see you <a href="https://twitter.com/hashtag/recsys2018?src=hash&amp;ref_src=twsrc%5Etfw">#recsys2018</a> <a href="https://t.co/6wMafEPSkj">pic.twitter.com/6wMafEPSkj</a></p>&mdash; Jakub Macina (@dmacjam) <a href="https://twitter.com/dmacjam/status/1049087850703028224?ref_src=twsrc%5Etfw">October 8, 2018</a></blockquote>
<script async="" src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>]]></content><author><name>Jakub Macina</name><email>jakub@macina.sk</email></author><category term="Research" /><category term="research" /><summary type="html"><![CDATA[I attended the RecSys 2018 research conference in Vancouver, Canada.]]></summary></entry></feed>