For macro, Matlab. Or if you do not have Matlab, Julia.
Matlab is dead. There is still a lot of code out there, but it does not make sense to start new projects with Matlab now that there are superior open-source solutions.
^this one gets it
The speed difference between Julia and Python with Numba + parallelism is negligible, while at the same time Python gives you much higher productivity.
This. Python also has a very good C FFI (ctypes) that lets you call native code easily and map memory segments to NumPy arrays. This may be of interest for, say, evaluating the likelihood of some complicated model. This approach works very well in practice, and it is what is done in commercial settings: the core libraries are in C/C++/Fortran and the "end user" accesses the functionality through a Python API.
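The Numba route mentioned above can be sketched minimally. This is an illustration, not a benchmark: the `log_likelihood` function and the Gaussian model are made-up examples, and the block falls back to plain Python if Numba is not installed so it still runs.

```python
import numpy as np

try:
    from numba import njit, prange
except ImportError:  # graceful fallback for environments without Numba
    prange = range
    def njit(*args, **kwargs):
        if args and callable(args[0]):
            return args[0]
        def wrap(f):
            return f
        return wrap

@njit(parallel=True)
def log_likelihood(x, mu, sigma):
    # Gaussian log-likelihood accumulated over a (potentially parallel) loop;
    # Numba compiles this to native code and splits prange across threads.
    total = 0.0
    for i in prange(x.shape[0]):
        total += -0.5 * ((x[i] - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi))
    return total

x = np.random.default_rng(0).normal(0.0, 1.0, 100_000)
print(log_likelihood(x, 0.0, 1.0))
```

The point being made in the thread is that this plain-loop style, which would be very slow in pure Python, compiles to roughly C-like speed under Numba.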
lmao, what a silly comment.
Julia suffers from the problem faced by all young languages, i.e. the comparative lack of long-established, trusted libraries for linear algebra, optimization, etc.
Not silly at all. Unlike most other platforms (such as Python/R), which use the old, trusted LAPACK, LINPACK, etc. routines, Julia has reimplemented a lot of procedures in Julia itself. This eases potential modifications but also inevitably introduces new bugs.
It's the same algorithms. You're just being a petty little turd.
FFI? oh god. the horror.
The advantage of Julia is that you don't need to know any of this gibberish to write code that is almost as fast as Fortran. There are two simple rules for fast Julia code: 1) write everything within functions, and 2) make everything type-stable. Rule 2) is a little tricky to understand in the beginning, but after a few weeks of coding your own model you will quickly get it.
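The type-stability idea in rule 2) can be illustrated even in Python terms (the classic example from the Julia performance docs, translated): a type-unstable function returns a different type depending on a runtime value, which defeats a JIT compiler's ability to generate fast specialized code. The function names here are made up for illustration.

```python
def pos_unstable(x):
    # Returns an int (0) for negative input but a float otherwise:
    # the return type depends on the runtime value, not just the input type.
    return 0 if x < 0 else x

def pos_stable(x):
    # Always returns a float for float input: the return type is
    # predictable from the input type alone, so a JIT can specialize.
    return 0.0 if x < 0 else x

print(type(pos_unstable(-1.5)))  # <class 'int'>
print(type(pos_stable(-1.5)))    # <class 'float'>
```

In Julia the analogous fix is writing `zero(x)` instead of a literal `0`, so the branch returns the same type as `x`.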
You have to write very non-Python-like code (type declarations, etc.) to get Julia as fast as C/Fortran. So why use Julia when you get the same speed with Python, which comes with a much larger base of users and packages? Julia is still a very niche thing and I wouldn't learn it at this point. Perhaps in the future, but who knows what happens in 10 years.
Also, Julia loves loops, so the code becomes simpler and more readable as well as faster.
These are the classic undergrad questions.
When they get a day off and fail to hit on someone on Tinder, they think about doing something that would give them a feeling of superiority over the people who actually scored on Tinder.
So they come over here, pretend to be a PhD student or junior professor, and ask a stupid question that has been answered 1000 times, but they think their problem is unique and requires extra attention. Gtfo please.
https://towardsdatascience.com/machine-learning-in-python-vs-julia-is-julia-faster-dc4f7e9e74db
Bottom line: the code that runs fast in Python is actually written in C by professional programmers.
Yes, and the point is precisely to leverage those fast, trusted libraries as much as possible in applied economic work. It also shows that the C FFI approach described above is eminently sensible if one hits a bottleneck with Python.
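The ctypes approach discussed above looks roughly like this. A minimal sketch, assuming a Unix-like system where `find_library` can locate libc; a real project would load its own compiled C/Fortran likelihood library instead, but the marshaling is the same.

```python
import ctypes
import ctypes.util

import numpy as np

# Load the C standard library (stand-in for your own compiled library).
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Declare the C signature so ctypes converts arguments and results correctly.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t
print(libc.strlen(b"likelihood"))  # 10

# Expose a NumPy buffer to C without copying: a native routine would receive
# this pointer plus x.size and operate on the same memory the array owns.
x = np.arange(5, dtype=np.float64)
ptr = x.ctypes.data_as(ctypes.POINTER(ctypes.c_double))
print(ptr[3])  # 3.0 -- the C-side view of the same memory
```

Declaring `argtypes`/`restype` is the part people forget; without it ctypes defaults to `int`, which silently corrupts doubles and 64-bit sizes.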
The 20-something code-chimpanzees who lack a high school education will say Julia.
The PhD researchers will say the language doesn't matter. Use the one that already has robust, reliable, well-tested libraries, used by thousands of others, that do what you need, so you can spend all of your time on your research rather than reinventing the wheel and writing and debugging code.
This. Now get back to doing research