This article outlines the implementation details for RECKONING, which uses a GPT-2-base model and runs on NVIDIA A100 GPUs.

Technical Setup for RECKONING: Inner Loop Gradient Steps, Learning Rates, and Hardware Specification


Abstract and 1. Introduction

  2. Background

  3. Method

  4. Experiments

    4.1 Multi-hop Reasoning Performance

    4.2 Reasoning with Distractors

    4.3 Generalization to Real-World Knowledge

    4.4 Run-time Analysis

    4.5 Memorizing Knowledge

  5. Related Work

  6. Conclusion, Acknowledgements, and References

A. Dataset

B. In-context Reasoning with Distractors

C. Implementation Details

D. Adaptive Learning Rate

E. Experiments with Large Language Models

C Implementation Details

We select GPT-2-base [59] as the model for our method and all the baselines, using the version implemented by the Huggingface Transformers library [78]. All the experiments for RECKONING are conducted on a cluster with NVIDIA A100 (40GB) GPUs, and all the baseline experiments are conducted on a local machine with an NVIDIA RTX 3090 (24GB) GPU.

Table 6: Dataset splits and statistics for our experiments.

Table 7: An example from the dataset ProofWriter. There are 6 facts and 6 rules mapped to three question-answer pairs. Each question can be answered based on the given facts and rules.
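For concreteness, the snippet below is a minimal sketch of the model setup described above, assuming the standard Huggingface Transformers API ("gpt2" is the base, 124M-parameter checkpoint); the pad-token workaround is our assumption for batched training, not a detail from the paper.

```python
# Minimal model setup sketch, assuming the standard Huggingface
# Transformers API; "gpt2" is the base (124M-parameter) checkpoint.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# GPT-2 ships without a padding token; reusing EOS is a common
# workaround for batched training (an assumption, not from the paper).
tokenizer.pad_token = tokenizer.eos_token

model.to("cuda" if torch.cuda.is_available() else "cpu")
```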

Fine-tuned In-context Reasoning We set the training batch size to 16 and train the model for 6 epochs with early stopping based on the validation label accuracy. We set the learning rate to 3e-5 and use the AdamW optimizer with ϵ set to 1e-8. We validate the model on the development set after every epoch and select the best checkpoint using validation accuracy as the metric.
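The baseline configuration above maps onto a standard fine-tuning loop. The sketch below is one plausible realization: `train_loader`, `dev_loader`, and `label_accuracy` are hypothetical stand-ins for the paper's data pipeline and evaluation code, and the early-stopping patience value is an assumption.

```python
import torch
from torch.optim import AdamW

# Hyperparameters reported above: lr 3e-5, AdamW with eps 1e-8.
optimizer = AdamW(model.parameters(), lr=3e-5, eps=1e-8)

best_acc, patience, bad_epochs = 0.0, 2, 0    # patience is an assumption
for epoch in range(6):                         # at most 6 epochs
    model.train()
    for batch in train_loader:                 # batches of 16 examples
        loss = model(**batch).loss             # LM loss (labels in batch)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

    # Validate on the development set after every epoch and keep the
    # checkpoint with the best label accuracy.
    acc = label_accuracy(model, dev_loader)
    if acc > best_acc:
        best_acc, bad_epochs = acc, 0
        torch.save(model.state_dict(), "best_checkpoint.pt")
    else:
        bad_epochs += 1
        if bad_epochs >= patience:             # early stopping
            break
```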

RECKONING In the inner loop, we generally perform 4 gradient steps for lower-hop questions (2, 3, and 4-hop) and 5 gradient steps for higher-hop questions (5 and 6-hop). We select AdamW [46] as the inner-loop optimizer since the main task is language modeling. The inner-loop learning rate is initialized to 3e-5 before training, and the algorithm dynamically learns a set of optimal learning rates as it converges. In our experiments and analysis, we only report results from RECKONING with the multi-task objective, since it performs better than the single-task objective. In the outer loop, we also use AdamW with a learning rate of 3e-5. For both optimizers, we set ϵ to 1e-8. We set the training batch size to 2 due to memory limitations, and apply gradient accumulation with the accumulation step set to 2. We train the model for 6 epochs with early stopping. In each epoch, we validate the model twice: once in the middle and once at the end. We select the best model checkpoint based on the validation label accuracy.
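For reference, the sketch below illustrates the bi-level update described above. It is a deliberate simplification: the inner loop uses plain differentiable SGD with one learnable learning rate per step (initialized to 3e-5 and trained by the outer loop, echoing Appendix D), whereas the paper uses a differentiable AdamW; `knowledge_batch` and `question_batch` are hypothetical stand-ins, and the single-batch outer step omits the 2-step gradient accumulation for brevity.

```python
import torch
from torch.func import functional_call

n_inner_steps = 4  # 4 steps for 2-4-hop questions, 5 for 5-6-hop ones

# One learnable inner-loop learning rate per step, initialized to 3e-5
# and optimized by the outer loop.
inner_lrs = torch.nn.Parameter(torch.full((n_inner_steps,), 3e-5))

outer_opt = torch.optim.AdamW(
    list(model.parameters()) + [inner_lrs], lr=3e-5, eps=1e-8
)

def lm_loss(params, batch):
    # Functional forward pass so the inner updates stay differentiable.
    return functional_call(model, params, args=(), kwargs=batch).loss

# Inner loop: memorize the knowledge with a few gradient steps.
fast = dict(model.named_parameters())
for step in range(n_inner_steps):
    grads = torch.autograd.grad(
        lm_loss(fast, knowledge_batch), list(fast.values()),
        create_graph=True,  # keep the graph for the outer backward pass
    )
    fast = {
        name: weight - inner_lrs[step] * grad
        for (name, weight), grad in zip(fast.items(), grads)
    }

# Outer loop: answer the question with the adapted weights, then
# back-propagate through the inner-loop updates so gradients reach
# both the initial parameters and the per-step learning rates.
lm_loss(fast, question_batch).backward()
outer_opt.step()
outer_opt.zero_grad()
```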


:::info Authors:

(1) Zeming Chen, EPFL (zeming.chen@epfl.ch);

(2) Gail Weiss, EPFL;

(3) Eric Mitchell, Stanford University (eric.mitchell@cs.stanford.edu);

(4) Asli Celikyilmaz, Meta AI Research (aslic@meta.com);

(5) Antoine Bosselut, EPFL (antoine.bosselut@epfl.ch).

:::


:::info This paper is available on arxiv under CC BY 4.0 DEED license.

:::

