Since I know neither the schema nor the query you're running, I can only give you so much. You can look at costs to narrow in on a few spots.
Lines 30-32 of the plan show that you are pulling a lot of data from buyclicks and transactions, and that accounts for a huge portion of this query's cost. Lines 33-40 are very similar. It looks like you are pulling more data than you need, and you'll see the biggest improvement if you can trim those two spots. Fixing them will also make all the nested loops and hash joins that consume that data much faster.
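To make that concrete, here's a sketch of the kind of restructuring I mean. I don't know your schema, so the offers table and the offer_id column are my guesses; the point is to collapse buyclicks down to one row per offer before it reaches the joins:

    -- Hypothetical sketch: aggregate buyclicks *before* joining, so the
    -- nested loops and hash joins above it see far fewer rows.
    -- Table and column names other than buyclicks are assumptions.
    SELECT o.id, o.name, bc.click_count
    FROM offers o
    JOIN (
        SELECT offer_id, count(*) AS click_count
        FROM buyclicks
        GROUP BY offer_id
    ) bc ON bc.offer_id = o.id;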
There are also several sequential scans (Seq Scan) that you could likely improve with a quick index on the relevant columns. You also seem to be filtering on similar conditions against ad_codes in both the main query and a subquery, which could probably be factored out. These are small optimizations compared to the first one, though.
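For example, if one of those Seq Scans is on ad_codes with a filter on some code column (I'm guessing at the column name), an index like this would let the planner use an index scan instead:

    -- Hypothetical: index whatever column the ad_codes filter actually uses
    CREATE INDEX idx_ad_codes_code ON ad_codes (code);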
My email is in my profile if you want to talk about this more.
If you can filter down the number of rows in these cases, or return only columns that are present in the index, and/or add an index where required, you will speed things up. If you do add an index, watch that it doesn't hurt insert/update performance too much, though. You have created indexes on your foreign keys, right?
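e.g. something like this (the column name is a guess), so that counting buyclicks per offer can be answered with an index-only scan instead of hitting the table, depending on your Postgres version:

    -- Hypothetical covering index for a count-per-offer query
    CREATE INDEX idx_buyclicks_offer_id ON buyclicks (offer_id);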
Thanks for the tip. I'm not too worried about insert/update, since this is a reporting database and the only inserts are done by background processes.
Yes, we have fk indexes.
I don't see how we can filter down the number of rows, since the purpose of the outer query is to get a count of how many buyclicks there were for each offer. But I'll see if I can create a covering index that reduces the join cost.
I tried that.... http://explain.depesz.com/s/22H5
The problem is, I'm not sure how to read it. I can run EXPLAIN, but I can't run EXPLAIN ANALYZE, since the query is too slow to finish. Can you make heads or tails of it?