Steven Feuerstein talks about using BULK COLLECT with the LIMIT clause to fetch only a certain number of rows at a time.
If you know the average row size, you can estimate how many rows amount to roughly 500MB of data.
Then you can set that number as your LIMIT and use UTL_FILE to write each batch to a different file. For example, you can keep a counter
that you increment every time you fetch more rows and use it to build the file name, e.g. my_file_01.csv. When you
open each file, left-pad the counter with zeroes so that the files sort in the order you expect.
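A minimal PL/SQL sketch of that approach, assuming a hypothetical source table, columns, and directory object (adjust the LIMIT so that batch size × average row size lands near your 500MB target):

```sql
DECLARE
  CURSOR c_data IS SELECT col1, col2 FROM my_table;  -- hypothetical table/columns
  TYPE t_rows IS TABLE OF c_data%ROWTYPE;
  l_rows    t_rows;
  l_limit   PLS_INTEGER := 100000;  -- tune: l_limit * avg row size ~ 500MB
  l_file_no PLS_INTEGER := 0;
  l_file    UTL_FILE.FILE_TYPE;
BEGIN
  OPEN c_data;
  LOOP
    FETCH c_data BULK COLLECT INTO l_rows LIMIT l_limit;
    EXIT WHEN l_rows.COUNT = 0;
    l_file_no := l_file_no + 1;
    -- zero-pad the counter so files sort correctly: my_file_01.csv, my_file_02.csv, ...
    l_file := UTL_FILE.FOPEN('MY_DIR',  -- assumed directory object
                             'my_file_' || LPAD(l_file_no, 2, '0') || '.csv',
                             'w');
    FOR i IN 1 .. l_rows.COUNT LOOP
      UTL_FILE.PUT_LINE(l_file, l_rows(i).col1 || ',' || l_rows(i).col2);
    END LOOP;
    UTL_FILE.FCLOSE(l_file);
  END LOOP;
  CLOSE c_data;
END;
/
```

Note the loop exits when a fetch returns zero rows, which is the safe pattern with BULK COLLECT ... LIMIT: a final partial batch still gets written on the iteration before the empty fetch.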