Looking at the output of ocamlobjinfo on the mpi.cma archive gives a clue:
```
File /home/xleroy/.opam/4.14.0/lib/mpi/mpi.cma
Force custom: YES
Extra C object files: -lcamlmpi -lmpi
Extra C options: -L/usr/lib/x86_64-linux-gnu
Extra dynamically-loaded libraries:
```
The library does not provide its OCaml-C stub code as a DLL, which is why it cannot be loaded into a toplevel REPL ("Force custom: YES" and the empty "Extra dynamically-loaded libraries" field above). I would hope we could ship the stub code as a DLL if we built mpi.cma with ocamlmklib, but there may be obscure MPI-specific problems.
TODO: try to build mpi.cma and mpi.cmxa using ocamlmklib.
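For reference, a minimal sketch of what the ocamlmklib-based build could look like. The stub object file names below are illustrative placeholders, not the actual file names in this repository, and the `-L`/`-l` flags are taken from the ocamlobjinfo output above:

```shell
# Hypothetical invocation: ocamlmklib produces the static stub library
# (libcamlmpi.a), the shared stub library (dllcamlmpi.so), and both
# mpi.cma and mpi.cmxa, with the DLL recorded in the .cma so the
# toplevel can dlopen it.
ocamlmklib -o mpi -oc camlmpi \
  stub1.o stub2.o \
  mpi.cmo \
  -L/usr/lib/x86_64-linux-gnu -lmpi
```

With dllcamlmpi.so installed alongside mpi.cma (in a directory on the toplevel's DLL path), `#load "mpi.cma";;` should no longer require a custom runtime.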
Also, if you're using a distributed MPI implementation (e.g. on a cluster), I'm not sure what happens when the toplevel REPL runs interactively on one node while the other nodes run non-interactively.
Is it possible to use mpi via the toplevel (ocaml, utop, or jupyter)? I get the following error message:
I am using the '4.14.0+domains+flambda' opam switch.
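For context, a typical attempt to load the library in a toplevel looks like the session below (the exact error text is not shown above; `#require "mpi"` assumes the package is installed via opam/findlib):

```ocaml
(* In utop or ocaml with topfind loaded: *)
#require "mpi";;
(* Fails because mpi.cma is "custom runtime" only: there is no
   dllcamlmpi.so for the toplevel to dynamically load the C stubs. *)
```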