I haven't used compression in Oracle so can't comment on actual performance.
No coding changes should be required at all: Oracle handles compression internally when writing to the data block and decompression when reading from it, and this all happens transparently to the application.
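To illustrate the point that it is a DDL-only change (the table name here is hypothetical), enabling compression in Oracle 11g might look like this; the application's SQL is untouched:

```sql
-- Basic table compression: compresses data loaded via direct-path
-- operations only. Table name is hypothetical.
CREATE TABLE sales_history (
    sale_id  NUMBER,
    sale_dt  DATE,
    amount   NUMBER(12,2)
) COMPRESS;

-- OLTP compression (requires the separately licensed Advanced
-- Compression option) also compresses rows changed by conventional DML.
-- MOVE rewrites the existing blocks in the new compressed format.
ALTER TABLE sales_history MOVE COMPRESS FOR OLTP;
```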
Compression can both hurt and help performance, contradictory as that may sound.
CPU utilization goes up because of the extra "effort" required to compress and decompress data, so some degradation is possible.
At the same time, compression fits more rows into each data block, which makes better use of the database buffer cache: more rows are available in it without any additional memory allocation. More cached rows may translate into more consistent gets and fewer physical reads, and since I/O is usually one of the most expensive operations in the database, reducing physical reads may contribute to performance gains.
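One way to gauge that trade-off is to snapshot the relevant instance statistics before and after enabling compression and compare the deltas under a comparable workload (a sketch; these are standard V$SYSSTAT statistic names):

```sql
-- CPU cost vs. I/O savings: capture these values before and after
-- the change and compare the growth rates under similar load.
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('consistent gets',
                'physical reads',
                'CPU used by this session');
```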
As with any new feature, there are hardly any "rules of thumb" that apply to all scenarios. Your experience may differ from others' simply because of data volumes, how much your data changes throughout the day, whether it is mostly read-only, and so on. Only thorough testing will tell.
Please let us know your observations, as this is definitely an interesting feature to explore.
Prod/Dev: WF Server 8008/Win 2008 - WF Client 8008/Win 2008 - Dev. Studio: 8008/Windows 7 - DBMS: Oracle 11g Rel 2
Test: Dev. Studio 8008 /Windows 7 (Local) Output:HTML, EXL2K.