I am searching for an intuitive explanation of Latent Dirichlet Allocation (LDA). What are the separate steps when identifying the topics in a document corpus?
Additionally, I found this slide online with an explanation, but I am not sure how trustworthy or correct it is.
Can somebody help?

It's basically learning, via Bayesian updates, how likely a topic is to be associated with a word and vice versa, and hence which topics a document talks about, given the words it contains. If you look here, the $\theta$s are the per-document topic probabilities, the $\varphi$s are the per-topic word probabilities, and the $z$s are the topic assignments for individual word occurrences. The "Dirichlet" refers to a distribution over probability vectors summing to 1, such as each row of $\theta$ or $\varphi$.
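To make that concrete, here is a minimal sketch of collapsed Gibbs sampling, one common way to fit LDA: each word's topic assignment $z$ is repeatedly resampled in proportion to (how much its document likes each topic) × (how much each topic likes that word), which are exactly the $\theta$ and $\varphi$ counts above. The function name, hyperparameters, and toy corpus are my own illustrative choices, not from any particular library.

```python
import random

def lda_gibbs(docs, n_topics, n_iter=100, alpha=0.1, beta=0.01, seed=0):
    """Illustrative collapsed Gibbs sampler for LDA (not production code)."""
    rng = random.Random(seed)
    vocab = sorted({w for doc in docs for w in doc})
    V = len(vocab)
    w_id = {w: i for i, w in enumerate(vocab)}

    # z[d][i]: topic currently assigned to the i-th word of document d
    z = [[rng.randrange(n_topics) for _ in doc] for doc in docs]
    ndk = [[0] * n_topics for _ in docs]       # doc-topic counts (theta)
    nkw = [[0] * V for _ in range(n_topics)]   # topic-word counts (phi)
    nk = [0] * n_topics                        # total words per topic
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            ndk[d][k] += 1; nkw[k][w_id[w]] += 1; nk[k] += 1

    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                # remove this word's current assignment from the counts
                ndk[d][k] -= 1; nkw[k][w_id[w]] -= 1; nk[k] -= 1
                # full conditional: p(topic t) is proportional to
                # (doc-topic count + alpha) * (topic-word count + beta) / (topic size + V*beta)
                weights = [
                    (ndk[d][t] + alpha) * (nkw[t][w_id[w]] + beta) / (nk[t] + V * beta)
                    for t in range(n_topics)
                ]
                k = rng.choices(range(n_topics), weights=weights)[0]
                z[d][i] = k
                ndk[d][k] += 1; nkw[k][w_id[w]] += 1; nk[k] += 1

    # smoothed per-document topic proportions (estimate of theta)
    theta = [
        [(ndk[d][t] + alpha) / (len(doc) + n_topics * alpha) for t in range(n_topics)]
        for d, doc in enumerate(docs)
    ]
    return theta, vocab, nkw
```

Running it on a tiny two-theme corpus, e.g. `lda_gibbs([["ball","goal","ball"], ["cake","sugar","cake"], ["goal","ball","sugar"]], n_topics=2)`, returns one topic-probability vector per document, each summing to 1. Real implementations (e.g. gensim or scikit-learn) use variational inference or optimized samplers, but the Bayesian update intuition is the same.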