The 9 million records may not be the problem. Inefficient FOCUS code may be the culprit, as may inadequate indexes on the columns you are joining on, filtering on, sorting by, etc.
I would go back to the original joins, turn XRETRIEVAL off (so no data is actually extracted), and trace the generated SQL.
Include this before the data extract portion of your program:
-* Generate the SQL without retrieving any data
SET XRETRIEVAL=OFF
-* Reset any active traces, then trace the SQL translation,
-* the generated statements, and the aggregation messages
SET TRACEOFF = ALL
SET TRACEON = SQLTRANS
SET TRACEON = STMTRACE//CLIENT
SET TRACEON = SQLAGGR//CLIENT
-* Keep the trace output readable
SET TRACESTAMP = OFF
SET TRACEWRAP = 78
SET TRACEUSER = ON
-RUN
This will display the generated SQL along with messages that should help you make the code more efficient.
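With those settings in place, the request that follows them returns no answer set; the trace simply shows what would have been sent to the RDBMS. A minimal sketch, assuming a hypothetical ORDERS synonym with AMOUNT, CUSTOMER_ID, and ORDER_DATE fields (substitute your own):
-* Placeholder request; with XRETRIEVAL=OFF only the SQL is shown
TABLE FILE ORDERS
SUM AMOUNT
BY CUSTOMER_ID
WHERE ORDER_DATE GE '20070101'
END
-RUN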
According to http://www.quest-pipelines.com/newsletter-v3/0702_D.htm, indexes should be created for:
* Primary key and foreign key columns.
* Columns which must have unique values to satisfy a business requirement.
* Columns that have column functions (COUNT, SUM, AVG, MIN, and MAX, for example) computed frequently.
* Columns used to test for the existence of a value (if the value does not exist, no data pages will be accessed).
* Columns which are searched or joined over less than 5 to 10 percent of the rows when considering a non-clustering index.
* Columns which are searched or joined over less than 30 to 50 percent of the rows when considering a clustering index.
* Columns frequently used together in a WHERE clause, which can benefit from a composite index to avoid maintaining multiple indexes.
* Columns frequently used in an ORDER BY, GROUP BY, or DISTINCT clause to avoid sorts.
This advice probably holds for DB2, Oracle, MS SQL Server, etc.
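For example, if the trace shows a join on CUSTOMER_ID qualified by a WHERE on ORDER_DATE, a composite index along the lines of the sketch below may help. The table, column, and index names are hypothetical; the DDL can be issued from your DBMS tool, or passed through from a FEX as shown here (SQLORA assumes the Oracle adapter is configured; use your own engine name):
-* Hypothetical composite index, issued via SQL Passthru
SQL SQLORA
CREATE INDEX IX_ORDERS_CUST_DATE
ON ORDERS (CUSTOMER_ID, ORDER_DATE);
END
-RUN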