E.g. Soft Actor-Critic, which uses entropy regularization.
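For anyone unfamiliar with the term, a minimal sketch of what entropy regularization means (toy discrete policy and made-up numbers for illustration, not actual SAC code):

```python
import math

# Entropy-regularized objective J = E[r] + alpha * H(pi): the agent trades off
# expected reward against the entropy of its policy, so it is rewarded for
# staying stochastic instead of collapsing onto a single action.

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

def soft_objective(probs, rewards, alpha):
    expected_reward = sum(p * r for p, r in zip(probs, rewards))
    return expected_reward + alpha * entropy(probs)

rewards = [1.0, 0.9, 0.2]
greedy = [1.0, 0.0, 0.0]   # deterministic policy on the best action
mixed = [0.6, 0.3, 0.1]    # stochastic policy

# With a large enough entropy weight, the stochastic policy scores higher
# even though its expected reward is lower.
print(soft_objective(greedy, rewards, alpha=0.5))
print(soft_objective(mixed, rewards, alpha=0.5))
```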
LASSO is gaining popularity.

Not true. There are still lots of recent developments and lots of open questions.
You are probably referring to Bickel, Ritov & Tsybakov (2009), but the only new result in that paper was probably the equivalence with the Dantzig selector.
So is deep learning the future of 'metrics? Yeah, it's something really new.
We see more and more of this approach. Is it the future of 'metrics?
Statisticians think that the last interesting lasso result was proven about 10 years ago. It's old stuff to them.

Yeah, in typical machine learning classes they cover OLS in one day and then quickly move on to MLE/Lasso/ridge, which should only take a week. While econ PhDs stay on OLS for years and derive every useless property about it.
This. We actually spent a week on OLS and ridge regression.
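For context on what is actually being compared here: in the one-predictor, no-intercept case all three estimators have closed forms, which makes the OLS/ridge/lasso relationship easy to see. A minimal sketch with toy numbers of my own:

```python
# Closed-form estimates for a single predictor with no intercept:
#   OLS:   b = sum(x*y) / sum(x^2)
#   Ridge: b = sum(x*y) / (sum(x^2) + lam)               -- shrinks toward zero
#   Lasso: b = soft_threshold(sum(x*y), lam/2) / sum(x^2) -- can hit exactly zero

def ols_beta(x, y):
    return sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

def ridge_beta(x, y, lam):
    return sum(xi * yi for xi, yi in zip(x, y)) / (sum(xi * xi for xi in x) + lam)

def lasso_beta(x, y, lam):
    z = sum(xi * yi for xi, yi in zip(x, y))
    shrunk = max(abs(z) - lam / 2, 0.0)   # soft thresholding
    return (shrunk if z >= 0 else -shrunk) / sum(xi * xi for xi in x)

x = [1.0, 2.0, 3.0, 4.0]
y = [2.1, 3.9, 6.2, 7.8]  # roughly y = 2x

b_ols = ols_beta(x, y)
b_ridge = ridge_beta(x, y, lam=3.0)
b_lasso = lasso_beta(x, y, lam=6.0)
print(b_ols, b_ridge, b_lasso)  # ridge and lasso both shrink the OLS estimate
```

With a large enough penalty the lasso estimate is exactly zero, which is the variable-selection property that made it famous.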

Yeah, in typical machine learning classes they cover OLS in one day and then quickly move on to MLE/Lasso/ridge, which should only take a week. While econ PhDs stay on OLS for years and derive every useless property about it.
When do they get to GMM?
Towards the middle, when it comes to unsupervised learning. I assume you mean Gaussian Mixture Models?

Yeah, in typical machine learning classes they cover OLS in one day and then quickly move on to MLE/Lasso/ridge, which should only take a week. While econ PhDs stay on OLS for years and derive every useless property about it.
When do they get to GMM?
Towards the middle, when it comes to unsupervised learning. I assume you mean Gaussian Mixture Models?
No. The econometrician meant Generalized Method of Moments. But don't worry: in 2039 they will understand things like dropout, class dropout, and the like, and at that moment they will realize... hey, CS people were p-hacking in 2020! But this is the beauty of understanding both worlds. Keep your regularization based on lasso and publish in ECTA. Outstanding reputation for econometrics.
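To illustrate the name clash: the econometrician's GMM estimates parameters by setting sample moment conditions to zero, and in the just-identified case it reduces to the plain method of moments. A minimal sketch with toy data:

```python
# Generalized Method of Moments, just-identified case: with as many moment
# conditions as parameters, GMM reduces to solving the sample analogues of
#   E[x - mu] = 0   and   E[(x - mu)^2 - sigma2] = 0
# exactly, with no weighting matrix needed.

def gmm_mean_var(sample):
    n = len(sample)
    mu = sum(sample) / n                                # first moment condition
    sigma2 = sum((xi - mu) ** 2 for xi in sample) / n   # second moment condition
    return mu, sigma2

data = [1.0, 2.0, 3.0, 4.0, 5.0]
mu_hat, var_hat = gmm_mean_var(data)
print(mu_hat, var_hat)  # 3.0 2.0
```

A Gaussian Mixture Model, the other GMM, is an entirely different object: a latent-variable density model typically fit by EM.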

What about this: http://www.tsc.uc3m.es/~fernando/estimating.pdf (estimating GARCH models using Support Vector Machines, SVMs)?
I automatically recommend rejection if a paper uses machine learning methods for the sake of using the method. It tells me clearly they have no idea what they are doing.

I automatically recommend rejection if a paper uses machine learning methods for the sake of using the method. It tells me clearly they have no idea what they are doing.
Depends. A good strategy is to look at previous publications by the author. If he/she has code available on GitHub for anyone interested in doing a replication or using it on another dataset, give the benefit of the doubt. Another good strategy is to ask for robustness checks using unsupervised ML and Bayesian ML. This is enough to see whether the researcher understands what he/she is doing.

What about this: http://www.tsc.uc3m.es/~fernando/estimating.pdf (estimating GARCH models using Support Vector Machines, SVMs)?
I automatically recommend rejection if a paper uses machine learning methods for the sake of using the method. It tells me clearly they have no idea what they are doing.
It's just another technique. If you have a small sample size, my suggestion is to always test Bayesian or Gaussian techniques. Otherwise, I would recommend leapfrogging them and moving directly to artificial neural networks. Well, I may sound very critical, but as a reviewer I have accepted every ML paper I've reviewed as long as the code is publicly available on GitHub. The reason is simple: you can literally do whatever you want with the infinite number of available techniques.

What about this: http://www.tsc.uc3m.es/~fernando/estimating.pdf (estimating GARCH models using Support Vector Machines, SVMs)?
I automatically recommend rejection if a paper uses machine learning methods for the sake of using the method. It tells me clearly they have no idea what they are doing.
I have more sympathy for these methods for prediction than for causal inference. OLS performs poorly on out-of-sample prediction, and these methods can improve on it. So I do think this paper is fine. I do not think it contributes much of any use, but it is from 2003, so it might have been more novel then.
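For reference on what the paper's SVM approach is competing with: the standard way to fit GARCH(1,1) is to maximize the Gaussian log-likelihood implied by the variance recursion. A minimal sketch of that likelihood (toy returns and illustrative parameter values of my own, not the paper's method):

```python
import math

def garch11_loglik(returns, omega, alpha, beta):
    """Gaussian log-likelihood of a return series under a GARCH(1,1) variance
    path: sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}."""
    sigma2 = sum(r * r for r in returns) / len(returns)  # start at sample second moment
    ll = 0.0
    for r in returns:
        ll += -0.5 * (math.log(2 * math.pi) + math.log(sigma2) + r * r / sigma2)
        sigma2 = omega + alpha * r * r + beta * sigma2   # update conditional variance
    return ll

rets = [0.01, -0.02, 0.015, -0.005, 0.02]
ll = garch11_loglik(rets, omega=1e-5, alpha=0.1, beta=0.85)
print(ll)
```

In practice one would maximize this over (omega, alpha, beta) with a numerical optimizer, subject to alpha + beta < 1 for stationarity.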

What about this: http://www.tsc.uc3m.es/~fernando/estimating.pdf (estimating GARCH models using Support Vector Machines, SVMs)?
I automatically recommend rejection if a paper uses machine learning methods for the sake of using the method. It tells me clearly they have no idea what they are doing.
It's just another technique. If you have a small sample size, my suggestion is to always test Bayesian or Gaussian techniques. Otherwise, I would recommend leapfrogging them and moving directly to artificial neural networks. Well, I may sound very critical, but as a reviewer I have accepted every ML paper I've reviewed as long as the code is publicly available on GitHub. The reason is simple: you can literally do whatever you want with the infinite number of available techniques.
I think this sums up my feelings on these papers. "It's just another technique." Like I said, if you are doing the technique for the sake of the technique, then I do not think it makes enough of a contribution, generally speaking, of course.

What about this: http://www.tsc.uc3m.es/~fernando/estimating.pdf (estimating GARCH models using Support Vector Machines, SVMs)?
I automatically recommend rejection if a paper uses machine learning methods for the sake of using the method. It tells me clearly they have no idea what they are doing.
It's just another technique. If you have a small sample size, my suggestion is to always test Bayesian or Gaussian techniques. Otherwise, I would recommend leapfrogging them and moving directly to artificial neural networks. Well, I may sound very critical, but as a reviewer I have accepted every ML paper I've reviewed as long as the code is publicly available on GitHub. The reason is simple: you can literally do whatever you want with the infinite number of available techniques.
I think this sums up my feelings on these papers. "It's just another technique." Like I said, if you are doing the technique for the sake of the technique, then I do not think it makes enough of a contribution, generally speaking, of course.
Correct. My perspective is similar, but I think it's a good starting point for younger researchers. Adding some extra particularity to an existing technique is the standard in CS. This is the reason behind so many papers in this field... we do not write just the year in references; we write the year and the month. In addition, I see this as a form of democratization of the academic world. In contrast, econometrics is a field circumscribed to a restricted number of researchers who push the mathematical treatment of something that was interesting 20 years ago. This sends the wrong message to younger people: they respect your work but are unable to see its irrelevance. It's just the opinion of someone who started in econometrics and then discovered an outstanding new world in CS. Naturally subject to bias, at least to some extent.

What about this: http://www.tsc.uc3m.es/~fernando/estimating.pdf (estimating GARCH models using Support Vector Machines, SVMs)?
I automatically recommend rejection if a paper uses machine learning methods for the sake of using the method. It tells me clearly they have no idea what they are doing.
^This has already been done. Published in the ML literature some years back.

What about this: http://www.tsc.uc3m.es/~fernando/estimating.pdf (estimating GARCH models using Support Vector Machines, SVMs)?
I automatically recommend rejection if a paper uses machine learning methods for the sake of using the method. It tells me clearly they have no idea what they are doing.
^This has already been done. Published in the ML literature some years back.
The paper is from 2003 and contains many stupid sentences (even by 2003's standards).

If people use the terms "machine learning", "machine learned", "trained", ... 20 times in the abstract alone, then you know it's trash.
What about this: http://www.tsc.uc3m.es/~fernando/estimating.pdf (estimating GARCH models using Support Vector Machines, SVMs)?
I automatically recommend rejection if a paper uses machine learning methods for the sake of using the method. It tells me clearly they have no idea what they are doing.
It's just another technique. If you have a small sample size, my suggestion is to always test Bayesian or Gaussian techniques. Otherwise, I would recommend leapfrogging them and moving directly to artificial neural networks. Well, I may sound very critical, but as a reviewer I have accepted every ML paper I've reviewed as long as the code is publicly available on GitHub. The reason is simple: you can literally do whatever you want with the infinite number of available techniques.
I think this sums up my feelings on these papers. "It's just another technique." Like I said, if you are doing the technique for the sake of the technique, then I do not think it makes enough of a contribution, generally speaking, of course.