A recent study on additivity addresses the task of search result diversification and concludes that while weak baselines are usually significantly improved by the evaluated diversification methods, no significant improvement can be observed for strong baselines. Given the importance of this issue in shaping future evaluation strategies for search result diversification, this thesis first aims to reproduce the findings of that study and then to investigate its possible limitations. Our extensive experiments first reveal that, under the same experimental setting as the previous study, similar results can be obtained. Next, we hypothesize that for stronger baselines, the parameters of some methods should be tuned in a more fine-grained manner. With trade-off parameters determined specifically for each baseline run, we show that the percentage of significant improvements, even over strong baselines, can be doubled. As a further issue, we discuss the possible impact of using the same strong baseline retrieval function for the diversity computations of the methods. Finally, we analyze the effect of another parameter in search result diversification, namely the candidate set size, and show that using an adaptive candidate set size on a per-query basis, instead of a fixed value across all queries, further improves the performance of result diversification methods on strong baselines. In conclusion, when a strong baseline is used, it is more crucial to tune the parameters of the diversification methods under evaluation; but once this is done, additivity is achievable.
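The role of the trade-off parameter mentioned above can be illustrated with a generic MMR-style (Maximal Marginal Relevance) greedy re-ranker, a common template for diversification methods. This is a minimal sketch, not the thesis's own implementation; the function name, similarity dictionaries, and parameter values are hypothetical:

```python
def mmr_diversify(query_sim, doc_sim, candidates, k, lam):
    """Greedily re-rank `candidates`, balancing relevance against novelty.

    query_sim: dict mapping doc id -> similarity to the query (relevance).
    doc_sim:   dict mapping (doc, doc) pairs -> inter-document similarity.
    lam:       trade-off parameter; higher values favor relevance,
               lower values favor diversity (this is the parameter that,
               per the thesis, benefits from per-baseline tuning).
    """
    selected = []
    pool = list(candidates)
    while pool and len(selected) < k:
        best = max(
            pool,
            key=lambda d: lam * query_sim[d]
            - (1 - lam) * max((doc_sim[(d, s)] for s in selected), default=0.0),
        )
        selected.append(best)
        pool.remove(best)
    return selected


# Illustrative (made-up) similarities: d1 and d2 are near-duplicates.
qs = {"d1": 0.9, "d2": 0.85, "d3": 0.3}
ds = {("d1", "d2"): 0.95, ("d2", "d1"): 0.95,
      ("d1", "d3"): 0.1, ("d3", "d1"): 0.1,
      ("d2", "d3"): 0.2, ("d3", "d2"): 0.2}

balanced = mmr_diversify(qs, ds, ["d1", "d2", "d3"], k=2, lam=0.5)
relevance_heavy = mmr_diversify(qs, ds, ["d1", "d2", "d3"], k=2, lam=0.9)
```

With `lam=0.5` the re-ranker drops the near-duplicate `d2` in favor of the novel `d3`, while `lam=0.9` keeps the two most relevant documents; this sensitivity is why a single fixed trade-off value across all baseline runs can mask improvements over stronger baselines.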