Posit AI Blog: torch 0.11.0


torch v0.11.0 is now on CRAN! This blog post highlights some of the changes included
in this release. You can always find the full changelog
on the torch website.

Improved loading of state dicts

For a long time it has been possible to use torch from R to load state dicts (i.e.
model weights) trained with PyTorch, using the load_state_dict() function.
However, it was common to get the error:

Error in cpp_load_state_dict(path) :  isGenericDict() INTERNAL ASSERT FAILED at

This happened because, when saving the state_dict from Python, it wasn't really
a dictionary but an ordered dictionary. Weights in PyTorch are serialized as Pickle files, a Python-specific format similar to our RDS. To load them in C++, without a Python runtime,
LibTorch implements a pickle reader that can read only a subset of the
file format, and this subset didn't include ordered dicts.

This release adds support for reading ordered dictionaries, so you won't see
this error any longer.
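As a quick sketch of the workflow this fixes (the file name and module shape below are hypothetical, standing in for a real checkpoint exported from Python with `torch.save(model.state_dict(), "weights.pt")`):

```r
library(torch)

# Hypothetical checkpoint saved from Python with:
#   torch.save(model.state_dict(), "weights.pt")
state_dict <- load_state_dict("weights.pt")

# The R module must match the architecture the weights came from.
model <- nn_linear(10, 1)
model$load_state_dict(state_dict)
```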

Besides that, reading these files requires half the peak memory usage, and as a
consequence is also much faster. Here are the timings for reading a 3B-parameter
model (StableLM-3B) with v0.10.0:

system.time({
  x <- torch::load_state_dict("~/Downloads/pytorch_model-00001-of-00002.bin")
  y <- torch::load_state_dict("~/Downloads/pytorch_model-00002-of-00002.bin")
})
   user  system elapsed 
662.300  26.859 713.484 

and with v0.11.0:

   user  system elapsed 
  0.022   3.016   4.016 

This means we went from minutes to just a few seconds.

Using JIT operations

One of the most common ways of extending LibTorch/PyTorch is by implementing JIT
operations. This allows developers to write custom, optimized code in C++ and
use it directly in PyTorch, with full support for JIT tracing and scripting.
See our 'Torch outside the box'
blog post if you want to learn more about it.

Using JIT operators in R used to require package developers to implement C++/Rcpp
wrappers for each operator if they wanted to be able to call them from R directly.
This release adds support for calling JIT operators without requiring authors to
implement the wrappers.

The only visible change is that we now have a new symbol in the torch namespace, called
jit_ops. Let's load torchvisionlib, a torch extension that registers many different
JIT operations. Just loading the package with library(torchvisionlib) will make
its operators available for torch to use; this is because the mechanism that registers
the operators acts when the package DLL (or shared library) is loaded.

For instance, let's use the read_file operator, which efficiently reads a file
into a raw (bytes) torch tensor.

library(torchvisionlib)
torch::jit_ops$image$read_file("img.png")
torch_tensor
 137
  80
  78
  71
 ...
   0
   0
 103
... [the output was truncated (use n=-1 to disable)]
[ CPUByteType{325862} ]

We've also made autocomplete work nicely, so that you can interactively explore the available
operators by typing jit_ops$ and pressing Tab to trigger RStudio's autocomplete.
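For example, assuming torchvisionlib registers the non-maximum-suppression operator under the torchvision namespace (as upstream torchvision does), you could call it directly on random boxes and scores; the 0.5 IoU threshold is an arbitrary choice for this sketch:

```r
library(torch)
library(torchvisionlib)

# 50 candidate boxes in (x1, y1, x2, y2) format, with random scores.
boxes  <- torch_rand(50, 4)
scores <- torch_rand(50)

# Keep the indices of boxes that survive non-maximum suppression.
keep <- jit_ops$torchvision$nms(boxes, scores, 0.5)
keep
```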

Other small improvements

This release also adds many small improvements that make torch more intuitive:

  • You can now specify the tensor dtype using a string, e.g.: torch_randn(3, dtype = "float64"). (Previously you had to specify the dtype using a torch function, such as torch_float64().)

    torch_randn(3, dtype = "float64")
    torch_tensor
    -1.0919
     1.3140
     1.3559
    [ CPUDoubleType{3} ]
  • You can now use with_device() and local_device() to temporarily modify the device
    on which tensors are created. Before, you had to use device in each tensor
    creation function call. This allows for initializing a module on a specific device:

    with_device(device = "mps", {
      linear <- nn_linear(10, 1)
    })
    linear$weight$device
    torch_device(type='mps', index=0)
  • It's now possible to temporarily modify the torch seed, which makes creating
    reproducible programs easier.

    with_torch_manual_seed(seed = 1, {
      torch_randn(1)
    })
    torch_tensor
     0.6614
    [ CPUFloatType{1} ]
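A small sketch of why this helps reproducibility: two calls with the same seed produce the same draw, while the RNG state outside the blocks is left undisturbed.

```r
library(torch)

# Same seed, same draw: both tensors are identical.
a <- with_torch_manual_seed(seed = 1, torch_randn(1))
b <- with_torch_manual_seed(seed = 1, torch_randn(1))
torch_equal(a, b)  # TRUE
```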

Thank you to all contributors to the torch ecosystem. This work wouldn't be possible without
all the helpful issues opened, the PRs you created, and your hard work.

If you are new to torch and want to learn more, we highly recommend the recently announced book 'Deep Learning and Scientific Computing with R torch'.

If you want to start contributing to torch, feel free to reach out on GitHub and see our contributing guide.

The full changelog for this release can be found here.

Photo by Ian Schneider on Unsplash

Reuse

Text and figures are licensed under Creative Commons Attribution CC BY 4.0. Figures that have been reused from other sources do not fall under this license and can be recognized by a note in their caption: "Figure from …".

Citation

For attribution, please cite this work as

Falbel (2023, June 7). Posit AI Blog: torch 0.11.0. Retrieved from https://blogs.rstudio.com/tensorflow/posts/2023-06-07-torch-0-11/

BibTeX citation

@misc{torch-0-11-0,
  author = {Falbel, Daniel},
  title = {Posit AI Blog: torch 0.11.0},
  url = {https://blogs.rstudio.com/tensorflow/posts/2023-06-07-torch-0-11/},
  year = {2023}
}