    Posit AI Blog: torch 0.9.0

By big tee tech hub · September 13, 2025 · 4 min read

    We are happy to announce that torch v0.9.0 is now on CRAN. This version adds support for ARM systems running macOS, and brings significant performance improvements. This release also includes many smaller bug fixes and features. The full changelog can be found here.

    Performance improvements

    torch for R uses LibTorch as its backend. This is the same library that powers PyTorch – meaning that we should see very similar performance when
    comparing programs.

However, torch has a very different design compared to other machine learning libraries that wrap C++ code bases (e.g., xgboost). There, the overhead is insignificant because there are only a few R function calls before training starts; the whole training then happens without ever leaving C++. In torch, C++ functions are wrapped at the operator level, and since a model consists of many calls to operators, the R function call overhead can become substantial.
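To see why per-operator wrapping matters, here is an illustrative sketch (assuming the torch package is installed; the operation and sizes are arbitrary). Each operator call crosses the R/C++ boundary once, so a model built from many small operations pays the R call overhead repeatedly:

```r
library(torch)

x <- torch_randn(64, 64)

many_small_calls <- function(x, n = 1000) {
  # each iteration is one R-level call into a C++ operator
  for (i in seq_len(n)) x <- torch_relu(x)
  x
}

system.time(many_small_calls(x))
```

Reducing the fixed cost of each of these round trips is what the improvements below target.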

    We have established a set of benchmarks, each trying to identify performance bottlenecks in specific torch features. In some of the benchmarks we were able to make the new version up to 250x faster than the last CRAN version. In Figure 1 we can see the relative performance of torch v0.9.0 and torch v0.8.1 in each of the benchmarks running on the CUDA device:



    Figure 1: Relative performance of v0.8.1 vs v0.9.0 on the CUDA device. Relative performance is measured by (new_time/old_time)^-1.
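The metric in the figure captions is just the inverse time ratio, i.e. how many times faster the new version is. A small R helper makes this concrete (the timings passed in are made-up values for illustration):

```r
# Relative performance as defined in the figure captions:
# (new_time / old_time)^-1, equivalent to old_time / new_time.
relative_performance <- function(new_time, old_time) {
  (new_time / old_time)^-1
}

relative_performance(new_time = 1, old_time = 4)  # 4: the new version is 4x faster
```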

The main source of performance improvements on the GPU is better memory
management, achieved by avoiding unnecessary calls to the R garbage collector. See
the ‘Memory management’ article in the torch documentation for more details.

On the CPU device the results are less impressive, even though some of the benchmarks
are 25x faster with v0.9.0. On CPU, the main performance bottleneck we solved was
the creation of a new thread for each backward call. We now use a thread pool, making the backward and optim benchmarks almost 25x faster for some batch sizes.
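For reference, the kind of loop the ‘backward’ and ‘optim’ benchmarks exercise looks roughly like the sketch below (assuming torch is installed; the model, loss, and shapes are illustrative, not the actual benchmark code):

```r
library(torch)

model <- nn_linear(10, 1)
opt   <- optim_sgd(model$parameters, lr = 0.01)

x <- torch_randn(32, 10)
y <- torch_randn(32, 1)

for (step in 1:10) {
  opt$zero_grad()
  loss <- nnf_mse_loss(model(x), y)
  loss$backward()  # in v0.9.0 this reuses a thread pool rather than spawning a thread
  opt$step()
}
```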



    Figure 2: Relative performance of v0.8.1 vs v0.9.0 on the CPU device. Relative performance is measured by (new_time/old_time)^-1.

    The benchmark code is fully available for reproducibility. Although this release brings
    significant improvements in torch for R performance, we will continue working on this topic, and hope to further improve results in the next releases.

    Support for Apple Silicon

torch v0.9.0 can now run natively on devices equipped with Apple Silicon. When
installing torch from an ARM R build, torch will automatically download the pre-built
LibTorch binaries that target this platform.
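Installation looks the same as on other platforms; the platform-specific backend download happens automatically, or can be triggered explicitly (a sketch, assuming an ARM build of R on macOS):

```r
install.packages("torch")
torch::install_torch()  # downloads the pre-built ARM macOS LibTorch binaries
```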

Additionally, you can now run torch operations on your Mac GPU. This feature is
implemented in LibTorch through the Metal Performance Shaders API, meaning that it
supports both Mac devices equipped with AMD GPUs and those with Apple Silicon chips. So far, it
has only been tested on Apple Silicon devices. Don’t hesitate to open an issue if you
have problems testing this feature.

    In order to use the macOS GPU, you need to place tensors on the MPS device. Then,
    operations on those tensors will happen on the GPU. For example:

    x <- torch_randn(100, 100, device="mps")
    torch_mm(x, x)

    If you are using nn_modules you also need to move the module to the MPS device,
    using the $to(device="mps") method.
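Putting the two steps together, a sketch of running a module on the Mac GPU (assuming an Apple Silicon device where the MPS backend is available):

```r
library(torch)

model <- nn_linear(100, 10)
model$to(device = "mps")  # move the module's parameters to the MPS device

x <- torch_randn(32, 100, device = "mps")
y <- model(x)             # the forward pass runs on the Mac GPU
```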

Note that as of this blog post this feature is in beta, and you might find operations
that are not yet implemented on the GPU. In that case, you might need to set the
environment variable PYTORCH_ENABLE_MPS_FALLBACK=1 so that torch automatically uses
the CPU as a fallback for that operation.
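Since this is an environment variable read by the backend, it should be set before torch does any MPS work, for example at the top of your script:

```r
# Enable CPU fallback for MPS operations that are not yet implemented
Sys.setenv(PYTORCH_ENABLE_MPS_FALLBACK = "1")
library(torch)
```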

    Other

    Many other small changes have been added in this release, including:

    • Update to LibTorch v1.12.1
    • Added torch_serialize() to allow creating a raw vector from torch objects.
    • torch_movedim() and $movedim() are now both 1-based indexed.
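As a quick illustration of the new serialization helper (a sketch, assuming torch is installed): torch_serialize() turns a torch object into a plain R raw vector, which can then be read back with torch_load():

```r
library(torch)

x   <- torch_randn(3, 3)
raw <- torch_serialize(x)  # a plain R raw vector, safe to save or send elsewhere
y   <- torch_load(raw)     # reconstruct the tensor from the raw vector
```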

The full changelog is available here.


    Posts also available at r-bloggers

    Reuse

    Text and figures are licensed under Creative Commons Attribution CC BY 4.0. The figures that have been reused from other sources don’t fall under this license and can be recognized by a note in their caption: “Figure from …”.

    Citation

    For attribution, please cite this work as

    Falbel (2022, Oct. 25). Posit AI Blog: torch 0.9.0. Retrieved from 

    BibTeX citation

    @misc{torch-0-9-0,
      author = {Falbel, Daniel},
      title = {Posit AI Blog: torch 0.9.0},
      url = {},
      year = {2022}
    }


