Oracle creates a memory area, known as the context area, for processing an SQL statement. It contains all the information needed to process the statement, such as the number of rows processed.
A cursor is a pointer to this context area. PL/SQL controls the context area through a cursor. A cursor holds the rows (one or more) returned by a SQL statement.
There are two types of cursors: implicit cursors, which Oracle creates automatically whenever a SQL statement is executed, and explicit cursors, which are declared and named by the programmer.
In PL/SQL, you can refer to the most recent implicit cursor as the SQL cursor, which always has the attributes %FOUND, %ISOPEN, %NOTFOUND, and %ROWCOUNT. The SQL cursor also has the attributes %BULK_ROWCOUNT and %BULK_EXCEPTIONS, which are designed for use with the FORALL statement.
The following example updates a row in the ukaccount.ukcourse table and then uses the implicit SQL cursor attributes to report what happened:

DECLARE
  total_rows number(2);
BEGIN
  UPDATE ukaccount.ukcourse
  SET    name = 'bangladesh'
  WHERE  id   = 1;

  IF sql%notfound THEN
    dbms_output.put_line('no ukcourse selected');
  ELSIF sql%found THEN
    total_rows := sql%rowcount;
    dbms_output.put_line(total_rows || ' ukcourse selected');
  END IF;
  COMMIT;
END;
/
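The %BULK_ROWCOUNT and %BULK_EXCEPTIONS attributes only become meaningful after a FORALL statement. The following is a minimal sketch, assuming the same ukaccount.ukcourse table and that IDs 1 and 2 exist, showing how %BULK_ROWCOUNT reports the rows affected by each iteration of the bulk DML:

DECLARE
  TYPE t_id_tab IS TABLE OF ukaccount.ukcourse.id%TYPE;
  l_ids t_id_tab := t_id_tab(1, 2);  -- IDs assumed to exist; adjust as needed
BEGIN
  FORALL i IN l_ids.first .. l_ids.last
    UPDATE ukaccount.ukcourse
    SET    name = upper(name)
    WHERE  id   = l_ids(i);

  -- SQL%BULK_ROWCOUNT(i) holds the row count for the i-th DML execution.
  FOR i IN l_ids.first .. l_ids.last LOOP
    dbms_output.put_line('ID ' || l_ids(i) || ' updated ' ||
                         sql%bulk_rowcount(i) || ' row(s)');
  END LOOP;

  dbms_output.put_line('Total rows updated: ' || sql%rowcount);
  ROLLBACK;  -- leave the test data unchanged
END;
/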
Working with an explicit cursor involves four steps: declaring it, opening it, fetching from it, and closing it.

-- Declaring the cursor
CURSOR c_customers IS
  SELECT id, name, address FROM customers;

-- Opening the cursor
OPEN c_customers;

-- Fetching the cursor
FETCH c_customers INTO c_id, c_name, c_addr;

-- Closing the cursor
CLOSE c_customers;
Explicit cursor example:

DECLARE
  CURSOR cur_ukcourse IS
    SELECT id, name FROM ukaccount.ukcourse;
  c_id   ukaccount.ukcourse.id%type;
  c_name ukaccount.ukcourse.name%type;
BEGIN
  OPEN cur_ukcourse;
  LOOP
    FETCH cur_ukcourse INTO c_id, c_name;
    EXIT WHEN cur_ukcourse%notfound;
    dbms_output.put_line(c_id || ' ' || c_name);
  END LOOP;
  CLOSE cur_ukcourse;
END;
/
A record is a data structure that can hold data items of different kinds. Records consist of different fields, similar to a row of a database table.
PL/SQL can handle the following types of records: table-based records, cursor-based records, and user-defined records.
The %ROWTYPE attribute enables a programmer to create table-based and cursor-based records.
The following example illustrates the concept of table-based records, using the ukaccount.ukcourse table:
DECLARE
  course_table_record ukaccount.ukcourse%rowtype;
BEGIN
  SELECT * INTO course_table_record
  FROM   ukaccount.ukcourse
  WHERE  id = 2;

  dbms_output.put_line('Course ID: '   || course_table_record.id);
  dbms_output.put_line('Course Name: ' || course_table_record.name);
END;
/
The following example illustrates the concept of cursor-based records, again using the ukaccount.ukcourse table:
DECLARE
  CURSOR course_cur IS
    SELECT id, name FROM ukaccount.ukcourse;
  course_record course_cur%rowtype;
BEGIN
  OPEN course_cur;
  LOOP
    FETCH course_cur INTO course_record;
    EXIT WHEN course_cur%notfound;
    dbms_output.put_line(course_record.id || ' ' || course_record.name);
  END LOOP;
  CLOSE course_cur;
END;
/
PL/SQL also provides user-defined record types, which let you define your own record structures made up of whatever fields you choose. The following example declares a record type with an id and a name field and uses it to create and print two records:
DECLARE
  TYPE ukcourse IS RECORD (
    id   number(3),
    name varchar(50)
  );
  ukcourse1 ukcourse;
  ukcourse2 ukcourse;
BEGIN
  -- ukcourse1 specification
  ukcourse1.id   := 3;
  ukcourse1.name := 'India';

  -- ukcourse2 specification
  ukcourse2.id   := 4;
  ukcourse2.name := 'Pakistan';

  -- Print the ukcourse1 record
  dbms_output.put_line('id  : ' || ukcourse1.id);
  dbms_output.put_line('name: ' || ukcourse1.name);

  -- Print the ukcourse2 record
  dbms_output.put_line('id  : ' || ukcourse2.id);
  dbms_output.put_line('name: ' || ukcourse2.name);
END;
/
This article is an update of one written for Oracle 8i (Bulk Binds) and includes new features available in Oracle 9i Release 2 and beyond.
Bulk binds can improve performance when loading collections from queries. The BULK COLLECT INTO construct binds the output of the query to the collection. To test this, create the following table.
CREATE TABLE ukaccount.bulk_ukcourse AS SELECT * FROM ukaccount.ukcourse;
The following code compares the time taken to populate a collection manually and using a bulk bind.
SET SERVEROUTPUT ON
DECLARE
  TYPE t_bulk_collect_test IS TABLE OF ukaccount.bulk_ukcourse%ROWTYPE;

  l_tab   t_bulk_collect_test := t_bulk_collect_test();
  l_start NUMBER;
BEGIN
  -- Time a regular population.
  l_start := DBMS_UTILITY.get_time;

  FOR cur_rec IN (SELECT * FROM ukaccount.bulk_ukcourse) LOOP
    l_tab.extend;
    l_tab(l_tab.last) := cur_rec;
  END LOOP;

  DBMS_OUTPUT.put_line('Regular (' || l_tab.count || ' rows): ' ||
                       (DBMS_UTILITY.get_time - l_start));

  -- Time bulk population.
  l_start := DBMS_UTILITY.get_time;

  SELECT *
  BULK COLLECT INTO l_tab
  FROM   ukaccount.bulk_ukcourse;

  DBMS_OUTPUT.put_line('Bulk (' || l_tab.count || ' rows): ' ||
                       (DBMS_UTILITY.get_time - l_start));
END;
/

Output:

Regular (42578 rows): 66
Bulk (42578 rows): 4
The BULK COLLECT functionality works equally well for associative arrays (index-by tables), nested tables and varrays. In the case of associative arrays, the index will always be PLS_INTEGER and populated from 1 to N based on row order.
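As a minimal sketch, assuming the ukaccount.bulk_ukcourse table created above, the following collects into an associative array; the indexes always run densely from 1 to the row count:

SET SERVEROUTPUT ON
DECLARE
  -- Associative array (index-by table) keyed by PLS_INTEGER.
  TYPE t_assoc_tab IS TABLE OF ukaccount.bulk_ukcourse%ROWTYPE
    INDEX BY PLS_INTEGER;
  l_tab t_assoc_tab;
BEGIN
  SELECT *
  BULK COLLECT INTO l_tab
  FROM   ukaccount.bulk_ukcourse;

  -- The collection is dense, so indexes 1 .. l_tab.count are all populated.
  DBMS_OUTPUT.put_line('Rows collected : ' || l_tab.count);
  IF l_tab.count > 0 THEN
    DBMS_OUTPUT.put_line('First row id   : ' || l_tab(1).id);
    DBMS_OUTPUT.put_line('Last row id    : ' || l_tab(l_tab.count).id);
  END IF;
END;
/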
Remember that collections are held in memory, so doing a bulk collect from a large query could cause a considerable performance problem. In practice you would rarely do a straight bulk collect in this manner. Instead, you would limit the rows returned using the LIMIT clause and process the data in smaller chunks. This gives you the benefits of bulk binds without hogging all the server memory. The following code shows how to chunk through the data in a large table.
SET SERVEROUTPUT ON
DECLARE
  TYPE t_bulk_collect_test IS TABLE OF ukaccount.bulk_ukcourse%ROWTYPE;

  l_tab t_bulk_collect_test;

  CURSOR c_data IS
    SELECT * FROM ukaccount.bulk_ukcourse;
BEGIN
  OPEN c_data;
  LOOP
    FETCH c_data BULK COLLECT INTO l_tab LIMIT 10000;
    EXIT WHEN l_tab.count = 0;

    -- Process contents of collection here.
    DBMS_OUTPUT.put_line(l_tab.count || ' rows');
  END LOOP;
  CLOSE c_data;
END;
/
So we can see that with a LIMIT 10000 we were able to break the data into chunks of 10,000 rows, reducing the memory footprint of our application, while still taking advantage of bulk binds. The array size you pick will depend on the width of the rows you are returning and the amount of memory you are happy to use.
From Oracle 10g onward, the optimizing PL/SQL compiler converts cursor FOR LOOPs into BULK COLLECTs with an array size of 100. The following example compares the speed of a regular cursor FOR LOOP with BULK COLLECTs using varying array sizes.
SET SERVEROUTPUT ON
DECLARE
  TYPE t_bulk_collect_test IS TABLE OF ukaccount.bulk_ukcourse%ROWTYPE;

  l_tab t_bulk_collect_test;

  CURSOR c_data IS
    SELECT * FROM ukaccount.bulk_ukcourse;

  l_start NUMBER;
BEGIN
  -- Time a regular cursor FOR loop.
  l_start := DBMS_UTILITY.get_time;

  FOR cur_rec IN (SELECT * FROM ukaccount.bulk_ukcourse) LOOP
    NULL;
  END LOOP;

  DBMS_OUTPUT.put_line('Regular   : ' || (DBMS_UTILITY.get_time - l_start));

  -- Time bulk with LIMIT 10.
  l_start := DBMS_UTILITY.get_time;

  OPEN c_data;
  LOOP
    FETCH c_data BULK COLLECT INTO l_tab LIMIT 10;
    EXIT WHEN l_tab.count = 0;
  END LOOP;
  CLOSE c_data;

  DBMS_OUTPUT.put_line('LIMIT 10  : ' || (DBMS_UTILITY.get_time - l_start));

  -- Time bulk with LIMIT 100.
  l_start := DBMS_UTILITY.get_time;

  OPEN c_data;
  LOOP
    FETCH c_data BULK COLLECT INTO l_tab LIMIT 100;
    EXIT WHEN l_tab.count = 0;
  END LOOP;
  CLOSE c_data;

  DBMS_OUTPUT.put_line('LIMIT 100 : ' || (DBMS_UTILITY.get_time - l_start));

  -- Time bulk with LIMIT 1000.
  l_start := DBMS_UTILITY.get_time;

  OPEN c_data;
  LOOP
    FETCH c_data BULK COLLECT INTO l_tab LIMIT 1000;
    EXIT WHEN l_tab.count = 0;
  END LOOP;
  CLOSE c_data;

  DBMS_OUTPUT.put_line('LIMIT 1000: ' || (DBMS_UTILITY.get_time - l_start));
END;
/

Output:

Regular   : 18
LIMIT 10  : 80
LIMIT 100 : 15
LIMIT 1000: 10
You can see from this example that the performance of a regular FOR LOOP is comparable to a BULK COLLECT using an array size of 100. Does this mean you can forget about BULK COLLECT from 10g onward? In my opinion, no. I think it makes sense to have control of the array size. If you have very small rows, you might want to increase the array size substantially. If you have very wide rows, 100 may be too large an array size.
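One way to keep that control is to make the array size a parameter. The sketch below is illustrative only; the procedure name process_bulk_ukcourse and the p_limit parameter are assumptions, not part of the original code:

CREATE OR REPLACE PROCEDURE process_bulk_ukcourse (
  p_limit IN PLS_INTEGER DEFAULT 100  -- array size chosen by the caller
) AS
  TYPE t_tab IS TABLE OF ukaccount.bulk_ukcourse%ROWTYPE;
  l_tab t_tab;

  CURSOR c_data IS
    SELECT * FROM ukaccount.bulk_ukcourse;
BEGIN
  OPEN c_data;
  LOOP
    FETCH c_data BULK COLLECT INTO l_tab LIMIT p_limit;
    EXIT WHEN l_tab.count = 0;

    -- Process the current chunk here.
    DBMS_OUTPUT.put_line(l_tab.count || ' rows in this chunk');
  END LOOP;
  CLOSE c_data;
END;
/

-- Small rows: a larger array size may pay off.
EXEC process_bulk_ukcourse(p_limit => 1000);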