We show that under suitable assumptions (primarily linearization) a simple and perspicuous online learning rule for Information Bottleneck optimization with spiking neurons can be derived. On common benchmark tasks, this rule performs as well as a considerably more complex rule that has previously been proposed. Furthermore, the transparency of this new learning rule makes a theoretical analysis of its convergence properties feasible. When applied to an ensemble of neurons, it provides a theoretically founded method for performing principal component analysis (PCA) with spiking neurons. In addition, it makes it possible to preferentially extract those principal components from incoming signals X that are related to some additional target signal. In a biological interpretation, this target signal (also called a relevance variable) could represent proprioceptive feedback, input from other sensory modalities, or top-down signals.
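To illustrate the kind of online PCA that such a learning rule performs, the following is a minimal sketch using Oja's rule, the classical rate-based analogue of spike-based PCA learning (this is a standard illustration, not the spike-based rule derived in the paper): the weight update w ← w + η·y·(x − y·w), with linear response y = w·x, drives the weight vector toward the top principal component of the input stream. The synthetic data and all parameter values below are illustrative assumptions.

```python
# Oja's rule: online extraction of the first principal component.
# Illustrative rate-based sketch; parameters (eta, sample count) are assumptions.
import math
import random

random.seed(0)

def sample():
    """Draw a 2-D input whose dominant variance lies along (1, 1)/sqrt(2)."""
    a = random.gauss(0.0, 3.0)   # large variance along (1, 1)
    b = random.gauss(0.0, 0.3)   # small variance along (1, -1)
    s = 1.0 / math.sqrt(2.0)
    return (s * (a + b), s * (a - b))

w = [1.0, 0.0]                   # initial weight vector
eta = 0.01                       # learning rate
for _ in range(20000):
    x = sample()
    y = w[0] * x[0] + w[1] * x[1]                      # linear response y = w.x
    w = [w[i] + eta * y * (x[i] - y * w[i])            # Oja update: Hebbian term
         for i in range(2)]                            # plus implicit normalization

# w converges (up to sign) to the unit vector along (1, 1)/sqrt(2).
print("norm:", math.hypot(w[0], w[1]))
print("w:", w)
```

The self-normalizing term −y²·w keeps the weight vector at unit length without an explicit projection step, which is what makes a plain online (sample-by-sample) formulation possible; extracting components correlated with a relevance variable would additionally require gating the update by the target signal.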