## On Becoming an Immigrant of the Bayesian Nation

I have been working with networks for a long time now (since 2008 or so). After many projects and thousands of hours spent studying and working with them, I recently started looking into subjects that make use of graph theory, hoping to take what I know and build some complementary skills in other areas. So when I started taking a Bayesian Networks (graphical probability) course, I was pleasantly surprised by how interesting it was and how many of the basic concepts transfer over.

## Networks are not Networks

The truth, however, is that the term "networks" in Bayesian Networks, or Bayesian Artificial Intelligence, is a little misleading. The Bayesian graphical view of probability and statistics has a lot less to do with graph theory and a lot more to do with traditional probability and statistics. It's true that the field's foundations use graph-theoretic concepts, and that the junction tree algorithm, the Markov blanket, propagation, and so on all have a graph-theoretic element. But for me, coming from the social networks world, the graph theory itself doesn't seem related right away. Perhaps it's my fault for assuming that a subject as large as graph theory would have components that all look similar; the fallacy of that thinking is equivalent to believing that all mathematics should look the same. As for the latter, I still feel that, at the level of mathematics I'm familiar with, everything dilutes down to things like making something really small or really large, adding something, multiplying something, or taking something out and putting something in.
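To make that graph-theoretic element concrete, here is a minimal sketch (all variable names and probabilities are made up for illustration) of a three-node chain A → B → C. The joint distribution factorises along the graph as P(a, b, c) = P(a) P(b|a) P(c|b), and the Markov blanket idea shows up as conditional independence: given B, node C tells you nothing more about A.

```python
from itertools import product

# Toy chain A -> B -> C with binary variables; numbers are invented.
P_A = {True: 0.3, False: 0.7}
# P(b | a): outer key is the parent value a.
P_B_given_A = {True: {True: 0.9, False: 0.1},
               False: {True: 0.2, False: 0.8}}
# P(c | b): outer key is the parent value b.
P_C_given_B = {True: {True: 0.8, False: 0.2},
               False: {True: 0.1, False: 0.9}}

def joint(a, b, c):
    # The joint factorises along the graph structure.
    return P_A[a] * P_B_given_A[a][b] * P_C_given_B[b][c]

# Inference by brute-force enumeration: P(A=True | C=True).
num = sum(joint(True, b, True) for b in (True, False))
den = sum(joint(a, b, True) for a, b in product((True, False), repeat=2))
print(round(num / den, 4))
```

Brute-force enumeration like this blows up exponentially in the number of variables, which is precisely what algorithms such as the junction tree exist to avoid on larger graphs.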

Doesn’t all of that basically describe set theory, abstract algebra and its descendants, calculus, and computation (not to mention that, at the transistor/multiplexer level, all Base-2 operations are just additions of ones and zeros), along with a whole lot of other sub-topics of mathematical theory? Even traditional probability and statistics can dilute down to just counting things: combinatorics.

Based on that line of thinking, I walked into the Bayesian Nation expecting we might talk about the centrality of a probability distribution, the statistical meaning of a BayesNet’s density, or a specific random variable’s measure of connectedness. Obviously, that was not the case. That is not to say those concepts are never used in graphical probability models; they are simply not taught at the introductory level in an academic environment. And from the literature I’ve reviewed so far, centrality and other summary network measures are not part of the graph theory taught in Bayesian Networks classes.

## Transitioning into Full Citizenry of the Non-Frequentistani People

To my delight, however, I have not been disappointed. I have found that Bayesian Networks are “*in my not-so-humble opinion, [an] inexhaustible source of magic. Capable of both inflicting injury, and remedying it.*” – Dumbledore. (He said that about BayesNets, really…pinky promise!)

Truly, Bayesian Networks are an incredibly powerful tool for anyone who wants a first attempt at describing the high-dimensional world around them. Slap on a quick naive BayesNet and, if you’re truly an expert in your field, you have basically just created a very robust tool for replicating your expertise about a given topic or problem; not only that, but the tool can now be used for inference. It’s safe to say that I immediately fell in love, especially coming from a world of agent-based modelling with simple rules, which much of the time translates to singly-connected IF/ELSE statements that, in my humble opinion, lack the kind of potency needed to create high-fidelity models: the kind that can be a boon to real policy and decision-making.
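As a sketch of that workflow (the node names and probabilities below are invented, standing in for an expert’s elicited estimates), a naive BayesNet reduces to a prior over the class node plus one conditional table per feature, with features assumed conditionally independent given the class. Inference is then just Bayes’ rule:

```python
# Hand-specified naive Bayes net: one class node, conditionally
# independent feature nodes. All numbers are made-up "expert" estimates.
prior = {"faulty": 0.1, "ok": 0.9}

# P(feature present | class), one table per feature.
likelihood = {
    "overheats": {"faulty": 0.8, "ok": 0.1},
    "noisy":     {"faulty": 0.6, "ok": 0.3},
}

def posterior(observed):
    """P(class | observed features) via Bayes' rule with normalisation."""
    scores = {}
    for cls, p in prior.items():
        for feat, present in observed.items():
            pf = likelihood[feat][cls]
            p *= pf if present else (1.0 - pf)
        scores[cls] = p
    z = sum(scores.values())  # normalising constant P(evidence)
    return {cls: s / z for cls, s in scores.items()}

print(posterior({"overheats": True, "noisy": True}))
```

With both symptoms observed, the posterior on "faulty" climbs from the 10% prior to about 0.64; swapping in real elicited numbers is all it takes to turn domain expertise into an inference tool.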

## Class Warfare and the Impending Legacy of Bayesia

The challenge I see with graphical probability models, though, is not with the models themselves. It’s with pedagogy. Graphical probability theory seems to have organically developed a community around it made up of academics and researchers who seem intent on making things harder than they actually are.

Perhaps they come from a culture born of the probability and statistics of the 1800s and 1900s, where individual researchers, in order to justify their clearly superior and supposedly God-given talent, would essentially block others from entering certain academic domains because they were “not worthy” or some other BS like that. (Other fields have the exact same problem; this is not strictly limited to this domain.) Of course, the monopoly on traditional P&S was eventually broken, especially with the availability of computers. Now everyone knows how to do what only a limited few could a hundred years ago.

My message to those people is simple: get off your high horses and start simplifying. You are not the only ones who can do what you do. If you continue trying to teach graphical probability models as if you are handing down the answer to the universe, then your legacy will be moot, just like that of the many researchers and academics who worked in your field over the last few hundred years.

Thomas Bayes was great not because he developed your branch of statistics; he was great because he simplified an otherwise complex natural phenomenon so that a teenager could understand it. You should attempt to do the same.


I love this article, Joe. I especially liked the last piece of advice you give to the Bayesian Network community. I feel the same way from my experience as well. I’m a regular now by the way.

great post!

“in my not-so-humble opinion, [an] inexhaustible source of magic. Capable of both inflicting injury, and remedying it.” – Dumbledore …..YES!!!!!!!!!!!!!!

“My message to those people is simple – get off your high horses and start simplifying – you are not the only ones that can do what you do.”

I totally agree. I took a graphical probability course a while back and it was a disaster because the instructor was an egotistical maniac. He just ‘effin did not want to “simplify” as you put it.

@ Jon Thank you for being a regular. I really am trying to post more.

@ Mo Thanks

@ Kathy I thought it fit. lol

@ Andre It’s the terminology mostly. I’ve always been taught to simplify. When your audience starts to believe that they can do what you can do, then you’ve done your job as an instructor.