Adapted from a response to a question posted on Oracle-l, here are three methods for finding out which partition or subpartition a table row is stored in, or which one a given row will be stored in.
Logical Inference

For range-based or list-based partitioning, or for range-list composite partitioning, it is often feasible to infer the partition or subpartition name from the values of the partition and subpartition key columns, as long as a sensible partition naming convention has been chosen.
For example, for a range partitioned table defined in part by ...
Create table T (col1 date ...)
Partition By Range (col1)
(
Partition Y2006_M01 values less than (date '2006-02-01'),
Partition Y2006_M02 values less than (date '2006-03-01'),
Partition Y2006_M03 values less than (date '2006-04-01'),
...
)
...we can use a SQL function to provide the partition name:
To_Char(col1,'"Y"YYYY"_M"MM')
Note the use of double-quotes in the above formula to identify literal strings in the date format.
This is a fast and simple method, although historically varying partition granularity or a complex partitioning scheme could make it difficult to maintain.
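For example, against the table above you could label each row with its inferred partition -- just a quick sketch, assuming the naming convention shown in the DDL and a populated table:

select to_char(col1, '"Y"YYYY"_M"MM') as inferred_partition,  -- partition name derived from the key
       count(*)                       as row_count
from   t
group  by to_char(col1, '"Y"YYYY"_M"MM')
order  by 1
/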
DBMS_MView.PMarker

In cases where it is difficult or impossible to perform the logical inference (hash partitioning, for example, or complex multicolumn range/list partitioning) there are a couple of other techniques that you can use.
In the DBMS_MVIEW package there is a PMARKER function that returns the data_object_id for the object in which the row resides, and you can join to user/dba/all_objects using that.
Here's a funky example script using variable numbers of hash subpartitions ...
drop table t
/
create table t (col1, col2, col3)
partition by range (col1)
subpartition by hash (col2)
(
  partition p1 values less than (2) subpartitions 2,
  partition p2 values less than (3) subpartitions 4,
  partition p3 values less than (4) subpartitions 8
)
as
select mod(rownum,3)+1,
       floor(dbms_random.value(1,256)),
       floor(dbms_random.value(1,10))
from   all_objects
where  rownum < 101
/
with obj as
  (select --+ materialize
          data_object_id,
          subobject_name
   from   user_objects
   where  object_name = 'T' and
          object_type = 'TABLE SUBPARTITION')
select subobject_name,
       col3
from   t,
       obj
where  data_object_id = DBMS_MView.PMarker(t.rowid)
order  by 1
/
My unsubstantiated guess is that this uses the file#/block# of the rowid to perform a look-up on the extent that contains the row. It therefore also ought to be possible to "hand-roll" a similar method by extracting the file# and block# from the rowids and joining to the dba/user/all_extents view (or a materialized subset of it, for performance reasons) to get the segment properties. I don't know if I'd care to go that route myself.
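For what it's worth, a hand-rolled version might look something like the following -- an untested sketch that assumes access to DBA_EXTENTS and relies on the relative file number and block range being unambiguous once restricted to the segments of table T:

select e.partition_name,                                        -- subpartition name, for a composite table
       count(*)
from   (select dbms_rowid.rowid_relative_fno(rowid) rel_fno,    -- relative file# extracted from the rowid
               dbms_rowid.rowid_block_number(rowid) block_no    -- block# extracted from the rowid
        from   t) r,
       dba_extents e
where  e.owner        = user
and    e.segment_name = 'T'
and    e.relative_fno = r.rel_fno
and    r.block_no between e.block_id and e.block_id + e.blocks - 1
group  by e.partition_name
order  by 1
/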
TBL$OR$IDX$PART$NUM()

The TBL$OR$IDX$PART$NUM() function gives you the appropriate partition number for a table value or set of values. It's an undocumented (except through Metalink, if that counts) function with a couple of magic numbers in it, but the general format to use is ...
TBL$OR$IDX$PART$NUM("PARTITIONED_TABLE_NAME", 0, d#, p#, "COLUMN_NAME")
The easiest way to get the appropriate format and magic numbers to use for this function is to run a trace on a "with validation" partition exchange against the table of interest, and you'll pretty much get the complete SQL that you need.
So you might end up with something similar to ...
with utp as
  (select --+ materialize
          partition_position,
          partition_name
   from   user_tab_partitions
   where  table_name = 'MY_TABLE')
select utp.partition_name,
       last_name,
       first_name
from   my_table,
       utp
where  utp.partition_position =
       TBL$OR$IDX$PART$NUM("MY_TABLE", 0, 0, 65535, "PART_COL")
/
This method has the advantage that you can just supply it with arbitrary values for the partition key column(s) and you will get the appropriate partition number/name. Therefore you can use it to answer the question "Which partitions will these rows go into?". If you use partition exchanges "without validation" then you can also use it to run periodic checks on whether your ETL process has been putting rows in the wrong partitions.
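For a single candidate value, it appears you can supply a literal (or bind variable) in place of the column reference. The following is only a sketch, reusing the magic numbers from the example above and assuming the function will accept a literal and resolve the table name outside a query against that table:

select utp.partition_name
from   user_tab_partitions utp
where  utp.table_name = 'MY_TABLE'
and    utp.partition_position =
       TBL$OR$IDX$PART$NUM("MY_TABLE", 0, 0, 65535, 42)   -- 42: a candidate partition key value
/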
The disadvantage is that it is undocumented and has those pesky magic numbers, of course.
Summary

My preference is to use the method of logical inference, if possible.
The other two are evidently based on very different methodologies -- one on the physical location of the row and the other on the values of the partition key column(s) -- so they can be applied in different situations. The TBL$OR$IDX$PART$NUM method is rather more amenable to performance tuning, by pre-aggregating the table data on distinct partition key values before applying the function call.