For most tasks a REPL environment will be most convenient; truly few tasks require the raw speed of native, compiled code.
Python is overrated af

Python is a very nice general purpose programming language. It is not made for econometrics and not useful for that purpose. I'll happily use it when there is a package that implements efficient regression with an arbitrary number of fixed effects and common choices for SE.

My CS prof said that if your role is not that of a software engineer or a very technical data scientist, then the best language is the one you already know or the one your company uses most widely. For the average economist that usually means R or Stata, or both. IMO there's nothing wrong with this, but if you're serious about coding then you're expected to know a more low-level language like C.

Am I a joke to you?
Sincerely,
R

https://qz.com/1417145/economics-nobel-laureate-paul-romer-is-a-python-programming-convert/
Python and R fit the same bill essentially: both open source, efficient package management systems, interpreted (slow to execute but fast to develop). Choice boils down to preference and/or third party packages. I know both and mostly use Python now. I think Python has a slight edge for those looking for non academic jobs as well. Some, including myself, dislike the direction R has taken lately (tidyverse), but are happy with base R.

Python can achieve practically the same speed as Julia, C, or Fortran, but its productivity level is much, much higher.
This is correct if using numpy and/or numba. In fact, for the marginal coder, it is easier to write fast code in Python than in Julia or C. Not long ago there was a thread here by someone who ported some C code to Julia only to find that Julia was faster: it is easier to write efficient code in Julia than in C, and by the same token it is easier to write fast code in Python.
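As a rough illustration of the numpy point: the same computation runs orders of magnitude faster once it is expressed as a vectorized operation, because the loop executes in numpy's compiled C code rather than in the Python interpreter. A minimal sketch (the function name `py_sum_sq` is just illustrative):

```python
import time
import numpy as np

n = 1_000_000
a = np.random.default_rng(1).normal(size=n)

def py_sum_sq(v):
    # pure-Python loop: every iteration goes through the interpreter
    s = 0.0
    for x in v:
        s += x * x
    return s

t0 = time.perf_counter()
slow = py_sum_sq(a)
t1 = time.perf_counter()
fast = float(a @ a)  # vectorized: the loop runs in compiled code inside numpy
t2 = time.perf_counter()

# same answer, very different cost
assert abs(slow - fast) < 1e-6 * abs(fast)
print(f"loop: {t1 - t0:.3f}s  numpy: {t2 - t1:.4f}s")
```

Numba pushes this further by JIT-compiling the explicit loop itself, which is what makes "C-like speed from Python" a realistic claim for numerical code.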

Python is a very nice general purpose programming language. It is not made for econometrics and not useful for that purpose. I'll happily use it when there is a package that implements efficient regression with an arbitrary number of fixed effects and common choices for SE.
Pretty sure Python can handle this.

https://qz.com/1417145/economics-nobel-laureate-paul-romer-is-a-python-programming-convert/
Economics involves a lot of math and statistics. The most commonly used tools to crunch numbers are the spreadsheet software Microsoft Excel and programming languages Stata and Mathematica. These are the tools that tend to be taught in economics classrooms across the world. All three of them are proprietary and privately owned.
No mention of Matlab. Bad article lol

No mention of Matlab. Bad article lol
It's not bad; it's giving Romer's perspective. Mathematica invented the notebook, but the software was simply too expensive for widespread adoption.
A lot of Stata use is unlicensed pirated copies, because its copy protection is weak. Matlab is gradually being replaced by Julia, and its use was always a bit fringe anyway. Mathematica is losing users to Python. Excel is still used because many universities haven't paid for alternatives like Mathematica, and IT security people dislike package managers and want everything installed on standardized desktops, so often the only thing available at universities is Excel. This is changing as laptop adoption has increased and wifi has spread: computer labs have become redundant, and people can choose what they install on their own machines. In the end they will choose what is best for them, which will mean a mix of different software, with nothing standard.