Simply put, Hadoop is a distributed platform for manipulating large data sets. It has fault tolerance built in, which makes it appealing to organizations where downtime can impact business processes. Cognos is a business intelligence tool that lets users explore and report on data. So there appears to be a logical fit.
Hadoop, however, does not (yet) lend itself to ad-hoc querying, as the other poster has commented. There is a Hadoop project that promises just that: Hive. Hive is simply a data warehouse view of your Hadoop data, queried using an SQL-like language called HiveQL, and developers have released ODBC connectors for it. Since Cognos can extract data from an ODBC source, it stands to reason that Cognos can extract data from Hadoop through Hive.
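To make that concrete, here is a minimal sketch of querying Hive over ODBC from Python, which is conceptually what Cognos does through its ODBC data source. The DSN, table, and column names are hypothetical placeholders, and running the query requires a live HiveServer with the Hive ODBC driver installed:

```python
# HiveQL is deliberately close to SQL, so a familiar aggregate query works.
# Table and column names here are made up for illustration.
hiveql = """
SELECT sku, SUM(quantity) AS total_qty
FROM inventory_events
GROUP BY sku
"""

def query_hive(dsn="DSN=HiveDSN"):
    """Run the query through an ODBC driver (requires a running HiveServer)."""
    import pyodbc  # third-party package: pip install pyodbc
    with pyodbc.connect(dsn, autocommit=True) as conn:
        return conn.cursor().execute(hiveql).fetchall()
```

Cognos sits in the same position as `query_hive` here: it just sees an ODBC source and issues SQL-style queries against it.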
The other approach to using Hadoop in your Cognos environment is to transfer data using text files such as CSV: Hadoop writes out a data file, which is then imported into Cognos. This is the approach I currently use.
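The hand-off itself is trivial. A Hadoop reducer conventionally emits tab-separated key/value lines (in files named like `part-00000`); the sketch below converts such output into a CSV file with a header row that Cognos can import. The column names are hypothetical:

```python
import csv
import io

def hadoop_output_to_csv(lines, out_file, header=("sku", "forecast")):
    """Convert tab-separated reducer output lines into CSV with a header."""
    writer = csv.writer(out_file)
    writer.writerow(header)
    for line in lines:
        # Each reducer line is "key<TAB>value"
        key, value = line.rstrip("\n").split("\t", 1)
        writer.writerow([key, value])

# Example: two lines as a reducer might emit them
reducer_output = ["A100\t42\n", "B200\t17\n"]
buf = io.StringIO()
hadoop_output_to_csv(reducer_output, buf)
print(buf.getvalue())
```

In practice you would read the `part-*` files out of HDFS (e.g. with `hadoop fs -get`) and point this at them, then import the resulting CSV into Cognos.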
I have not yet answered the "why" of using Hadoop. The two applications I have used Hadoop for are inventory forecasting and cash flow/budgeting. If you are trying to run routine forecasts across hundreds of thousands of SKUs, Hadoop is a wonderful tool. If you are trying to run a Monte Carlo simulation over a thousand budget items, Hadoop is wonderful. Just import data from your data warehouse, run your Hadoop jobs, and import the resulting CSV files into Cognos. Voila!
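For a flavor of the budgeting case, here is a minimal Monte Carlo sketch: each trial draws a cost for every budget line item from a triangular distribution (low / most likely / high estimates), and the spread of the simulated totals indicates budget risk. The numbers are made up; in a real run each Hadoop task would simulate a slice of the items and the totals would land in a CSV for Cognos:

```python
import random
import statistics

def simulate_budget(items, trials=10_000, seed=42):
    """Return a list of simulated budget totals, one per trial.

    Each item is a (low, most_likely, high) cost estimate; each trial
    draws every item from a triangular distribution and sums them.
    """
    rng = random.Random(seed)  # fixed seed for reproducible runs
    totals = []
    for _ in range(trials):
        totals.append(sum(rng.triangular(lo, hi, mode)
                          for lo, mode, hi in items))
    return totals

# Two hypothetical budget line items: (low, most likely, high)
items = [(900, 1000, 1300),
         (450, 500, 700)]
totals = simulate_budget(items)
print(round(statistics.mean(totals)), round(statistics.stdev(totals)))
```

Scaling this from two items to a thousand, and from one scenario to many, is exactly the kind of embarrassingly parallel work Hadoop handles well.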
Take care though: Hadoop is not a panacea. Sometimes old-fashioned SQL and your programming language of choice are just as good, or better. Hadoop also comes with a learning curve and real resource demands. I learned by downloading the Hortonworks Sandbox, a preconfigured virtual machine that runs in VMware, VirtualBox, etc., so you do not have to install or configure anything!