    Posit AI Blog: torch 0.10.0


    We are happy to announce that torch v0.10.0 is now on CRAN. In this blog post we
    highlight some of the changes that have been introduced in this version. You can
    check the full changelog here.

    Automatic Mixed Precision

    Automatic Mixed Precision (AMP) is a technique that enables faster training of deep learning models, while maintaining model accuracy by using a combination of single-precision (FP32) and half-precision (FP16) floating-point formats.

    In order to use automatic mixed precision with torch, you will need to use the with_autocast
    context switcher to allow torch to use different implementations of operations that can run
    with half-precision. In general it’s also recommended to scale the loss function, in order to
    preserve small gradients that would otherwise underflow to zero in half-precision.

    Here’s a minimal example, omitting the data generation process. You can find more information in the amp article.

    library(torch)

    ... # data generation omitted; see the amp article for the full example

    loss_fn <- nn_mse_loss()$cuda()
    net <- make_model(in_size, out_size, num_layers)
    opt <- optim_sgd(net$parameters, lr = 0.1)
    # Gradient scaler: scales the loss up so that small gradients
    # don't flush to zero in half-precision.
    scaler <- cuda_amp_grad_scaler()

    for (epoch in seq_len(epochs)) {
      for (i in seq_along(data)) {
        # Run the forward pass under autocast, so eligible operations
        # execute in half-precision.
        with_autocast(device_type = "cuda", {
          output <- net(data[[i]])
          loss <- loss_fn(output, targets[[i]])
        })

        # Backpropagate on the scaled loss; the scaler unscales the gradients
        # before the optimizer step and then updates its scale factor.
        scaler$scale(loss)$backward()
        scaler$step(opt)
        scaler$update()
        opt$zero_grad()
      }
    }

    In this example, using mixed precision led to a speedup of around 40%. The speedup is
    even bigger if you are just running inference, i.e., when you don’t need to scale the loss.
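
    For inference, a minimal sketch (reusing the net and data objects from the example above;
    with_no_grad() is torch’s context manager for disabling gradient tracking) could look like:

    # Sketch: autocast-only inference. No gradient scaler is needed,
    # since there is no backward pass and thus no gradients to underflow.
    with_no_grad({
      with_autocast(device_type = "cuda", {
        preds <- net(data[[1]])
      })
    })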

    Pre-built binaries

    With pre-built binaries, installing torch gets a lot easier and faster, especially if
    you are on Linux and use the CUDA-enabled builds. The pre-built binaries include
    LibLantern and LibTorch, both external dependencies necessary to run torch. Additionally,
    if you install the CUDA-enabled builds, the CUDA and
    cuDNN libraries are already included.

    To install the pre-built binaries, you can use:

    options(timeout = 600) # increasing the timeout is recommended, since we will be downloading a 2GB file.
    kind <- "cu117" # "cpu" and "cu117" are the only kinds currently supported.
    version <- "0.10.0"
    options(repos = c(
      torch = sprintf("https://storage.googleapis.com/torch-lantern-builds/packages/%s/%s/", kind, version),
      CRAN = "https://cloud.r-project.org" # or any other mirror from which you want to install the other R dependencies.
    ))
    install.packages("torch")
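
    Once installed, a quick sanity check confirms that the GPU build is working, e.g.:

    library(torch)
    cuda_is_available() # should return TRUE on a machine with a working CUDA setup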

    As a nice example, you can get up and running with a GPU on Google Colaboratory in
    less than 3 minutes!

    Colaboratory running torch

    Speedups

    Thanks to an issue opened by @egillax, we could find and fix a bug that caused
    torch functions returning a list of tensors to be very slow. The function in question
    was torch_split().

    This issue has been fixed in v0.10.0, and code relying on this behavior should be much
    faster now. Here’s a minimal benchmark comparing v0.9.1 with v0.10.0:

    bench::mark(
      torch::torch_split(1:100000, split_size = 10)
    )

    With v0.9.1 we get:

    # A tibble: 1 × 13
      expression      min  median `itr/sec` mem_alloc `gc/sec` n_itr  n_gc total_time
      <bch:expr> <bch:tm> <bch:tm>     <dbl> <bch:byt>    <dbl> <int> <dbl>   <bch:tm>
    1 x             322ms   350ms      2.85     397MB     24.3     2    17      701ms
    # ℹ 4 more variables: result <list>, memory <list>, time <list>, gc <list>

    while with v0.10.0:

    # A tibble: 1 × 13
      expression      min  median `itr/sec` mem_alloc `gc/sec` n_itr  n_gc total_time
      <bch:expr> <bch:tm> <bch:tm>     <dbl> <bch:byt>    <dbl> <int> <dbl>   <bch:tm>
    1 x              12ms  12.8ms      65.7     120MB     8.96    22     3      335ms
    # ℹ 4 more variables: result <list>, memory <list>, time <list>, gc <list>

    Build system refactoring

    The torch R package depends on LibLantern, a C interface to LibTorch. Lantern is part of
    the torch repository, but until v0.9.1 one would need to build LibLantern in a separate
    step before building the R package itself.

    This approach had several downsides, including:

    • Installing the package from GitHub was not reliable/reproducible, as you would depend
      on a transient pre-built binary.
    • Common devtools workflows like devtools::load_all() wouldn’t work if the user hadn’t built
      Lantern beforehand, which made it harder to contribute to torch.

    From now on, building LibLantern is part of the R package-building workflow, and can be enabled
    by setting the BUILD_LANTERN=1 environment variable. It’s not enabled by default, because
    building Lantern requires cmake and other tools (especially if building with GPU support),
    and using the pre-built binaries is preferable in those cases. With this environment variable set,
    users can run devtools::load_all() to locally build and test torch, as sketched below.
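
    For example, from a checkout of the torch repository, the contributor workflow could look
    roughly like this (a sketch, assuming cmake and a C++ toolchain are available on your PATH):

    # Sketch: build LibLantern as part of the usual devtools workflow.
    Sys.setenv(BUILD_LANTERN = "1") # opt in to building Lantern from source
    devtools::load_all()            # builds Lantern (requires cmake), then loads torch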

    This flag can also be used when installing torch dev versions from GitHub. If it’s set to 1,
    Lantern will be built from source instead of installing the pre-built binaries, which should lead
    to better reproducibility with development versions.
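
    A sketch of what that could look like, assuming the remotes package and the mlverse/torch
    repository:

    # Sketch: install a development version with Lantern built from source.
    Sys.setenv(BUILD_LANTERN = "1")
    remotes::install_github("mlverse/torch")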

    Also, as part of these changes, we have improved the torch automatic installation process. It now has
    improved error messages to help debug installation issues. It’s also easier to customize
    using environment variables; see help(install_torch) for more information.
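
    As a rough illustration (the exact variable names are documented in help(install_torch);
    TORCH_HOME as used here is an assumption to be checked against that page):

    # Sketch: customize where torch looks for / installs its binaries,
    # then re-run the installer. TORCH_HOME is assumed for illustration.
    Sys.setenv(TORCH_HOME = "/opt/torch-libs")
    torch::install_torch(reinstall = TRUE)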

    Thank you to all contributors to the torch ecosystem. This work would not be possible without
    all the helpful issues you opened, the PRs you created, and your hard work.

    If you are new to torch and want to learn more, we highly recommend the recently announced book ‘Deep Learning and Scientific Computing with R torch’.

    If you want to start contributing to torch, feel free to reach out on GitHub and see our contributing guide.

    The full changelog for this release can be found here.

