Posts by: Arthur Charpentier
This week, in Istanbul, for the second training on data science, we’ve been discussing classification and regression models, but also visualisation, including maps. We had a brief introduction to the leaflet package, devtools::install_github("rstudio/leaflet") require(leaflet) To see what can be done with that package, we will once again use John Snow’s cholera […]
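A minimal sketch of what the package makes possible (my own toy example, with made-up coordinates, not the code from the post):

library(leaflet)   # devtools::install_github("rstudio/leaflet")
pts <- data.frame(lng = c(-0.137, -0.136), lat = c(51.513, 51.514))  # hypothetical locations
m <- leaflet(pts) %>%
  addTiles() %>%                            # default OpenStreetMap background
  addCircleMarkers(~lng, ~lat, radius = 5)  # one circle per row of pts
m   # renders the interactive map in the viewer / browser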
Read more...
If we grow a tree with standard functions in R, on the same dataset used to introduce classification trees in a previous post, > MYOCARDE=read.table( + "http://freakonometrics.free.fr/saporta.csv", + head=TRUE,sep=";") > library(rpart) > cart<-rpart(PRONO~.,data=MYOCARDE) we get > library(rpart.plot) > library(rattle) > prp(cart,type=2,extra=1) The first step is to split the first node (based on the whole dataset). […]
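To give an idea of what happens behind that first split, here is a hedged sketch of the exhaustive search (it mimics the idea, it is not the rpart implementation): for each explanatory variable and each candidate threshold, compute the weighted Gini impurity of the two children, and keep the best pair.

gini <- function(y) {p <- mean(y == levels(factor(y))[1]); 2 * p * (1 - p)}
best <- NULL
for (v in setdiff(names(MYOCARDE), "PRONO")) {
  for (s in sort(unique(MYOCARDE[, v]))[-1]) {
    left  <- MYOCARDE$PRONO[MYOCARDE[, v] <  s]
    right <- MYOCARDE$PRONO[MYOCARDE[, v] >= s]
    imp   <- (length(left) * gini(left) + length(right) * gini(right)) / nrow(MYOCARDE)
    if (is.null(best) || imp < best$imp) best <- list(variable = v, threshold = s, imp = imp)
  }
}
best   # (approximate) variable and threshold of the first split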
Read more...
Yesterday, I mentioned a popular graph discussed when studying the theoretical foundations of statistical learning. But there is usually another one, which is the following. As previously, it is a graph with the risk on the y-axis, the red line being the risk on the training sample, and the black line the risk on the validation sample, as a […]
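The shape of that graph can be reproduced with a small simulation (my own synthetic example, using polynomial regressions of increasing degree, not the code from the post): the risk on the training sample keeps decreasing with the complexity of the model, while the risk on the validation sample eventually increases again.

set.seed(1)
n  <- 200
df <- data.frame(x = runif(n))
df$y <- sin(2 * pi * df$x) + rnorm(n, sd = .3)
train <- 1:(n / 2); valid <- (n / 2 + 1):n
deg <- 1:10
err <- t(sapply(deg, function(d) {
  fit <- lm(y ~ poly(x, d), data = df, subset = train)
  c(train = mean((df$y[train] - predict(fit))^2),
    valid = mean((df$y[valid] - predict(fit, newdata = df[valid, ]))^2))
}))
matplot(deg, err, type = "l", lty = 1, col = c("red", "black"),
        xlab = "complexity (polynomial degree)", ylab = "quadratic risk")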
Read more...
While I was working on the Theory of Statistical Learning, and the concept of consistency, I found the following popular graph (e.g. from those slides, here in French) The curve below is the error on the training sample, as a function of the size of the training sample. Above, it is the error on a […]
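That graph, too, is easy to reproduce on simulated data (a sketch under my own assumptions, here a simple linear model): the error on the training sample and the error on new observations, both as functions of the size of the training sample.

set.seed(1)
sizes <- seq(20, 500, by = 20)
err <- t(sapply(sizes, function(m) {
  x <- runif(m); y <- 1 + 2 * x + rnorm(m)
  fit <- lm(y ~ x)
  xn <- runif(1000); yn <- 1 + 2 * xn + rnorm(1000)   # new observations, same model
  c(train = mean(residuals(fit)^2),
    new   = mean((yn - predict(fit, data.frame(x = xn)))^2))
}))
matplot(sizes, err, type = "l", lty = 1, col = c("blue", "red"),
        xlab = "size of the training sample", ylab = "quadratic error")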
Read more...
So far, when discussing classification, we’ve been playing with my toy dataset (actually, I should not claim it’s mine, it is inspired by the one used in the introduction of Boosting, by Robert Schapire and Yoav Freund). But in real life, there are more observations, and more explanatory variables. With more than two explanatory variables, it starts […]
Read more...
In our data-science class, after discussing limitations of the logistic regression, e.g. the fact that the decision boundary was a straight line, we’ve mentioned possible natural extensions. Let us consider our (now) standard dataset clr1 <- c(rgb(1,0,0,1),rgb(0,0,1,1)) clr2 <- c(rgb(1,0,0,.2),rgb(0,0,1,.2)) x <- c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85) y <- c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3) z <- c(1,1,1,1,1,0,0,1,0,0) df <- data.frame(x,y,z) plot(x,y,pch=19,cex=2,col=clr1[z+1]) One […]
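One natural extension (an assumption on my part, not necessarily the one developed in the post) is simply to add quadratic and interaction terms to the logistic regression, so that the decision boundary is no longer a straight line (with only 10 points, expect warnings about quasi-separation):

reg <- glm(z ~ x + y + I(x^2) + I(y^2) + I(x * y), data = df, family = binomial)
vx <- seq(0, 1, length = 101); vy <- seq(0, 1, length = 101)
grid <- expand.grid(x = vx, y = vy)
p <- matrix(predict(reg, newdata = grid, type = "response"), 101, 101)
plot(x, y, pch = 19, cex = 2, col = clr1[z + 1])
contour(vx, vy, p, levels = .5, add = TRUE)   # now a curved decision boundary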
Read more...
Another popular technique for classification (or at least, which used to be popular) is the (linear) discriminant analysis, introduced by Ronald Fisher in 1936. Consider the same dataset as in our previous post > clr1 <- c(rgb(1,0,0,1),rgb(0,0,1,1)) > x <- c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85) > y <- c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3) > z <- c(1,1,1,1,1,0,0,1,0,0) > df <- data.frame(x,y,z) > plot(x,y,pch=19,cex=2,col=clr1[z+1]) […]
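As a minimal sketch (assuming the MASS package is available), Fisher's linear discriminant analysis on that toy dataset, with the implied linear decision boundary:

library(MASS)
fit <- lda(z ~ x + y, data = df)
vx <- seq(0, 1, length = 101); vy <- seq(0, 1, length = 101)
grid  <- expand.grid(x = vx, y = vy)
post1 <- predict(fit, newdata = grid)$posterior[, "1"]   # posterior probability of class 1 (blue)
plot(x, y, pch = 19, cex = 2, col = clr1[z + 1])
contour(vx, vy, matrix(post1, 101, 101), levels = .5, add = TRUE)   # linear boundary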
Read more...
We will start, in our Data Science course,  to discuss classification techniques (in the context of supervised models). Consider the following case, with 10 points, and two classes (red and blue) > clr1 <- c(rgb(1,0,0,1),rgb(0,0,1,1)) > clr2 <- c(rgb(1,0,0,.2),rgb(0,0,1,.2)) > x <- c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85) > y <- c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3) > z <- c(1,1,1,1,1,0,0,1,0,0) > df <- data.frame(x,y,z) […]
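Just to illustrate what a classifier does on those 10 points (a hedged example, using k-nearest neighbours as one possible technique, not necessarily the one discussed in the post): predict the class everywhere on a grid, and colour the plane accordingly.

library(class)
vx <- seq(0, 1, length = 101); vy <- seq(0, 1, length = 101)
grid <- expand.grid(x = vx, y = vy)
pred <- knn(train = df[, c("x", "y")], test = grid, cl = factor(df$z), k = 3)
plot(grid, pch = ".", col = clr2[as.numeric(as.character(pred)) + 1])   # predicted class on the grid
points(x, y, pch = 19, cex = 2, col = clr1[z + 1])                     # the 10 observations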
Read more...
For the time series assignment, the data provided by Google Trends are weekly, which can make the modelling complicated. As mentioned in class, it can be simpler to work with monthly data. The following small function transforms the data into monthly data (with monthly averages, […]
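Such a function can look like the following (a sketch under my own assumptions on the input format, not the exact function from the assignment): take a data frame with one row per week, a date column and a value column, and return monthly averages.

weekly2monthly <- function(base, date = "date", value = "value") {
  m <- format(as.Date(base[, date]), "%Y-%m")    # year-month label of each week
  v <- tapply(base[, value], m, mean)            # average of the weekly values, per month
  data.frame(month = names(v), value = as.numeric(v))
}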
Read more...
Yesterday, in the course on inequalities, I mentioned (too) briefly the 3-person economy. I wanted to spend some time in a short post on visualisations of inequalities in such a context. As mentioned in the slides, it is possible to use a ternary plot representation. In the case where we believe that the scale independence property $I(\lambda\boldsymbol{x})=I(\boldsymbol{x})$ […]
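That scale-independence property is easy to check numerically, e.g. on the Gini index (using the ineq package here is my own choice, just for illustration, not necessarily the index used in the post):

library(ineq)
x <- c(1, 2, 5)            # incomes in a 3-person economy
c(Gini(x), Gini(10 * x))   # identical values: I(lambda * x) = I(x)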
Read more...
$d_{\chi^2}^2(i_1,i_2)=\sum_j \frac{n}{n_{\cdot,j}}\left[\frac{n_{i_1,j}}{n_{i_1,\cdot}}-\frac{n_{i_2,j}}{n_{i_2,\cdot}}\right]^2$ Last night, we saw how to run a correspondence analysis starting from the contingency table. But the problem can be approached the other way round, starting from the individuals. Admittedly, the latter are not (really) observed, but it does not matter. > data(HairEyeColor) > N = HairEyeColor[,,"Male"] + HairEyeColor[,,"Female"] > Hair = rep(rep(rownames(N),ncol(N)), + as.vector(N)) > Eye = […]
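Continuing in the same spirit (a sketch, not the exact code from the post): rebuild the (pseudo) individuals, one row each, then run a correspondence analysis on the two qualitative variables, e.g. with the FactoMineR package (assumed installed).

Eye  <- rep(rep(colnames(N), each = nrow(N)), as.vector(N))   # same column-major order as Hair
base <- data.frame(Hair = factor(Hair), Eye = factor(Eye))
library(FactoMineR)
MCA(base, graph = TRUE)   # correspondence analysis computed from the individuals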
Read more...
In the last data analysis class, we started with (simple) correspondence analysis, based on a contingency table, for two qualitative variables. We then define the marginal effects, for the rows and for the columns. Consider the contingency table of the eye colour / hair colour variables, > […]
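To fix ideas, the marginal effects and the counts expected under independence can be computed in a few lines of base R (a sketch, not necessarily the code used in class):

data(HairEyeColor)
N   <- HairEyeColor[, , "Male"] + HairEyeColor[, , "Female"]
n   <- sum(N)
n_i <- rowSums(N)            # marginal effects for the rows
n_j <- colSums(N)            # marginal effects for the columns
ind <- outer(n_i, n_j) / n   # expected counts under independence
round(N - ind, 1)            # deviations that the correspondence analysis describes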
Read more...
Tomorrow, for the final lecture of the Mathematical Statistics course, I will try to illustrate – using Monte Carlo simulations – the difference between classical statistics and the Bayesian approach. The (simple) way I see it is the following: for frequentists, a probability is a measure of the frequency of repeated events, so the interpretation […]
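A minimal Monte Carlo sketch of that frequentist reading (my own toy example, not necessarily the one used in the lecture): the coverage of a 95% confidence interval for a proportion, over repeated samples.

set.seed(1)
theta <- .4; n <- 100
cover <- replicate(10000, {
  x  <- rbinom(1, n, theta)
  p  <- x / n
  ci <- p + c(-1, 1) * 1.96 * sqrt(p * (1 - p) / n)   # usual 95% confidence interval
  (theta >= ci[1]) & (theta <= ci[2])
})
mean(cover)   # proportion of intervals containing the true theta, close to 95%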
Read more...
While chatting with @tomroud last week, I came across the article Notre élite est toujours excellente, c’est le reste de la classe qui ne suit pas on the website of Le Monde. It is indeed a common preconception: if education is globally so weak in France, it would be because France is an elitist country, and the […]
Read more...
A few days ago, Jean-François Mignot published an interesting article entitled Tour de France 2014 : pourquoi le vainqueur gagne 100 fois plus que le 10e. In this article, we have the following graph, with the income of the cyclists as a function of their final ranking (the data were downloaded from http://sportbuzzbusiness.fr/) > bike=read.csv( + "http://freakonometrics.free.fr/tourdefrance.csv", […]
Read more...
Some writings worth reading, discovered here and there, “Hacking Online Polls and Other Ways British Spies Seek to Control the Internet” https://firstlook.org/theintercept/… “Palestinian & Israeli deaths” https://i.imgur.com/7cdU0R6  see “Total of 8,856 rockets, 4,845 Palestinian casualties and 174 Israeli casualties” http://haaretz.co.il/news/politics/1.2373486 … see “It’s Economics, Stupid!” http://jewishpolicycenter.org/971/… via http://theatlantic.com/politics/archive/… “Is It Better to Rent or Buy?” http://nytimes.com/2014/… […]
Read more...
In 1940, Wassily Hoeffding published Masstabinvariante Korrelationstheorie, which was an impressive paper. For those (like me) who unfortunately barely speak German, an English translation can be found in The Collected Works of Wassily Hoeffding, published a few years ago. As I keep saying in my courses on copulas, almost everything was already in that paper. […]
Read more...
When discussing transformations in regression models, I usually briefly introduce the Box-Cox transform (see e.g. an old post on that topic) and I also mention local regressions and nonparametric estimators (see e.g. another post). But while I was working on my ACT6420 course (on predictive modeling, which is a VEE for the SOA), I read something […]
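As a quick reminder of what the Box-Cox transform looks like in practice (a sketch on the cars dataset, assuming the MASS package, not the example from the course):

library(MASS)
reg <- lm(dist ~ speed, data = cars)
bc  <- boxcox(reg, lambda = seq(-1, 2, by = .05))   # profile likelihood as a function of lambda
bc$x[which.max(bc$y)]                               # lambda maximising the likelihood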
Read more...
In the last class on forecasting models, last week, we spent some time on the study of outliers and influential points. Everything is explained in the slides (with the code), so I will not go back over it. I will just mention a few lines of code used to see the impact of a […]
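For the record, the standard influence diagnostics in R look like the following (a sketch on the cars dataset, not necessarily the code from the slides):

reg <- lm(dist ~ speed, data = cars)
plot(cooks.distance(reg), type = "h")              # Cook's distance, observation by observation
which(hatvalues(reg) > 2 * mean(hatvalues(reg)))   # high-leverage observations
head(sort(abs(rstudent(reg)), decreasing = TRUE))  # largest studentised residuals (outliers)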
Read more...
A quick post to put online the code used last week, complementing the code from the slides. We are still working on the same dataset, where we try to predict the braking distance of a vehicle, taking the speed of the vehicle into account. > plot(cars) > reg=lm(dist~speed,data=cars) > summary(reg) Call: lm(formula = dist ~ speed, […]
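A possible continuation (a sketch of the usual follow-up, not necessarily the code from the slides): plot the observations, the fitted regression line, and a 95% confidence band.

u <- seq(min(cars$speed), max(cars$speed), length = 101)
p <- predict(reg, newdata = data.frame(speed = u), interval = "confidence")
plot(cars)
lines(u, p[, "fit"])            # fitted regression line
lines(u, p[, "lwr"], lty = 2)   # lower bound of the 95% confidence band
lines(u, p[, "upr"], lty = 2)   # upper bound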
Read more...
For several months now, there has been a (probably legitimate) craze for big data. While a lot can be done with the huge volumes of data available to insurers, it should be kept in mind that, in many cases, data are scarce, and that technology should not be able to change much […]
Read more...
In almost three weeks, the (FIFA) World Cup will start, in Brazil. I have to admit that I am not a big fan of soccer, so I will not talk too much about it. Actually, I wanted to talk about colors, and variations on some colors. For instance, there are a lot of blues. In […]
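Just to illustrate what "variations on a color" can mean in R (my own two-line example, not the code from the post):

blues <- colorRampPalette(c("lightblue", "darkblue"))(10)    # ten shades of blue
barplot(rep(1, 10), col = blues, border = NA, axes = FALSE)  # display the palette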
Read more...
Some writings worth reading “Four more reasons to be skeptical of open-access publishing” http://sciencedirect.com/scien… too bad there is a paywall, I can’t read it ! (mentioned in “Be sensible about open access, but it’s still a good thing!” http://katatrepsis.com/2014/04/15/be…) “Give us back our statistical data” http://washingtonpost.com/opinions/robert-samuelson… “10 Reasons You Will Read This Medium Post” https://medium.com/editors-picks […]
Read more...
On Monday the 28th, from 14:00 to 16:00, I will run a training session in room N-6320 at UQAM, on the theme introduction to classification trees. This training is organised as part of the seminars on quantitative and qualitative analysis methods, which have been held regularly for a little over a month, run by the collectif pour le développement et les applications en […]
Read more...
Five years ago, Ethan Siegel wrote a post entitled The Math of the Fastest Human Alive, using regressions. An alternative is to use extreme value models (a few years ago, I wrote a post on the maximum length of a tennis match, using extreme value theory). In 2009, John Einmahl and Sander Smeets […]
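As a generic sketch of the block-maxima approach (simulated data, and the evd package as one possible choice, not the analysis from the papers mentioned): fit a generalised extreme value (GEV) distribution to a sample of maxima.

library(evd)
set.seed(1)
m   <- apply(matrix(rnorm(50 * 365), 50), 1, max)   # 50 "annual" maxima of daily observations
fit <- fgev(m)                                      # maximum likelihood fit of the GEV
fit$estimate                                        # location, scale and shape parameters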
Read more...
Some writings worth reading “Kapital for the Twenty-First Century?” http://dissentmagazine.org/kapital-for… (extremely interesting review) “Out in the Open: Inside the Operating System Edward Snowden Used to Evade the NSA” http://wired.com/2014/04/tails  (see also http://theguardian.com/world/2013…) “Why UPS Trucks Don’t Turn Left” http://priceonomics.com/why-ups … “Overworked America: 12 Charts That Will Make Your Blood Boil” http://motherjones.com/politics/2… via @tomkeene see “Text […]
Read more...
Yesterday, on Twitter, @JF_Godbout shared a nice graph about the Quebec elections, with the number of votes obtained (here as a percentage of the total votes) and the percentage of seats it yields. It must be said that yesterday was election day. These are single-round elections, with several parties (say 4 main parties if we restrict ourselves […]
Read more...
This post (in a similar form) should be submitted in the coming weeks for publication in Variances, the journal of the ENSAE alumni. In the meantime, comments are open, and I would be delighted to hear other points of view! Actuarial science and actuaries In a ‘Persian tale‘ invented by Claude Bébéar, actuarial science was […]
Read more...
Last week, we went through the book, completely, one last time, before sending it back to the publisher, with some comments and remarks, before publication! So, this is it, the book will finally appear soon! It was scheduled for this week actually, but… you know. It should appear sometime by the end of […]
Read more...
In the context of AR(1) processes, we spent some time explaining what happens when the autoregressive coefficient $\phi$ is close to 1: if $|\phi|<1$ the process is stationary, if $\phi=1$ the process is a random walk, and if $|\phi|>1$ the process will explode. Again, random walks are extremely interesting processes, with puzzling properties. For instance, as $t\to\infty$, the variance of the process diverges, and the process will cross the […]
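A short simulation (my own illustration, not the code from the post) of the three regimes, with the same innovations but different values of the autoregressive coefficient:

n <- 200
sim_ar1 <- function(phi, seed = 1) {
  set.seed(seed)                 # same innovations for each value of phi
  x <- numeric(n)
  for (t in 2:n) x[t] <- phi * x[t - 1] + rnorm(1)
  x
}
matplot(cbind(sim_ar1(.8), sim_ar1(1), sim_ar1(1.02)), type = "l", lty = 1,
        xlab = "t", ylab = "X_t")   # stationary, random walk, explosive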
Read more...
Consider some ARCH($p$) process, say ARCH(1), where $\varepsilon_t=\sigma_t\,\eta_t$ with $\sigma_t^2=\omega+\alpha_1\,\varepsilon_{t-1}^2$, and where $(\eta_t)$ is a Gaussian (strong) white noise. > n=500 > a1=0.8 > a2=0.0 > w= 0.2 > set.seed(1) > eta=rnorm(n) > epsilon=rnorm(n) > sigma2=rep(w,n) > for(t in 3:n){ + sigma2[t]=w+a1*epsilon[t-1]^2+a2*epsilon[t-2]^2 + epsilon[t]=eta[t]*sqrt(sigma2[t]) + } > par(mfrow=c(1,1)) > plot(epsilon,type="l",ylim=c(min(epsilon)-.5,max(epsilon))) > lines(min(epsilon)-1+sqrt(sigma2),col="red") (the red line is the conditional volatility $\sigma_t$, shifted down for readability). […]
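A possible follow-up (an assumption on my part, not necessarily what the post does next): try to recover the ARCH(1) parameters from the simulated series, e.g. with the garch function of the tseries package.

library(tseries)
fit <- garch(epsilon, order = c(0, 1))   # GARCH(0,1), i.e. a pure ARCH(1)
summary(fit)                             # estimates of omega and alpha_1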
Read more...