Snapshots - SQL & PL/SQL

I have some Snapshots created based on Views (Complex
Snapshots). I have specified a refresh interval of every 30 minutes.
Sometimes there are no records in the Snapshots (I think during
refresh time).
Is there any way to make the Snapshots keep their records?
(According to the docs, Complex Snapshots are completely refreshed.)

Since this is a complex snapshot, it must be refreshed via the
complete method. The steps in executing such a refresh are
generally:
1) Truncate table
2) grab data from master table
The table is empty during the refresh.
Now, if you use Refresh Groups, I believe it looks more like
1) Delete all records from table
2) grab data from master table
3) commit;
I'm not sure about step 3 though.
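The two behaviours described above can be selected explicitly via the ATOMIC_REFRESH parameter of DBMS_MVIEW.REFRESH (a sketch, assuming a snapshot named MY_SNAP):

```sql
-- Non-atomic complete refresh: truncate, then reload.
-- The table is briefly empty while the refresh runs.
EXEC DBMS_MVIEW.REFRESH('MY_SNAP', method => 'C', atomic_refresh => FALSE);

-- Atomic refresh: delete and re-insert inside one transaction,
-- so readers never see an empty table (refresh groups behave this way).
EXEC DBMS_MVIEW.REFRESH('MY_SNAP', method => 'C', atomic_refresh => TRUE);
```

The atomic variant trades the speed of TRUNCATE for read consistency during the refresh.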


Moving data from Materialized View to downstream application

Database Version: 9i Release 2
I've created a Materialized View on a base table, the purpose of which is to capture data changes to send to a downstream application
(I've been advised to avoid using Streams for now, until we upgrade to 10g)
I need to either:
a) Make that data available for another process to pick up and have knowledge of what it needs to pick up versus what it has already picked up
b) Send/Update a table or MV at a downstream MS SQL Server database (resides within our firewall)
Some notes/questions
I cannot change the structure of the underlying base table
On the MV I could capture a date-time which records each time a record is touched on the base table; the problem is that many of these updates are “meaningless” and I should not send such updates downstream
1. Is there a way I can timestamp/flag the records as they get inserted/updated on the MV? (potentially used by the pickup process to identify what it needs to pick up)
2. If I flag the records e.g. “needs pickup” – would it be ok to make the MV updateable so the process can then mark the records as “picked up”. Performance?
3. Can I create a trigger on the MV to write records to a secondary table as they are inserted/updated on the MV?
4. Could I use the MV log tables MLOG$ to help me in any way?
I’m guessing there is no way to do the following but:
5. Is there a way to include a column on a Materialized View but ignore it in terms of capturing changes. e.g. I have columns a,b,c,d,e on my base table, I want to include a,b,c in my Materialized View but I only want my MV to be updated if a or b is updated on base table. If only column c is updated do not send any update to MV
Many thanks for any insight 
Can you not allow your downstream application to access the MV directly (or via a view) from Oracle?
Actually I can do that. I still have the problem, though, that when the downstream application pulls the records from the MV, how does it know what is new or updated versus what it has already pulled?
The idea is that the downstream app will need to pull data from this MV every minute or two and grab the delta. Without somehow timestamping the records on the MV I am not sure how the downstream app can recognize that delta. Is there a way I can timestamp the records as they get inserted/updated in the MV?
Many thanks 
That is if you still want to pull the data across to update SQL Server. My proposed solution is to make the MV your main source of the data so that you do not need to worry about what is new and what is old.
However, one question is does your table have a unique ID you can use to filter your data pull? If yes, then you can select from the MV where the ID does not exist in your destination table. 
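A sketch of the key-based pull suggested here, with hypothetical table names; note that this catches newly inserted rows but not updates to rows already pulled:

```sql
-- Pull only rows whose ID is not yet present in the destination table
SELECT m.*
FROM   my_mv m
WHERE  NOT EXISTS (SELECT 1
                   FROM   dest_table d
                   WHERE  d.id = m.id);
```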
Yes I do have a unique identifier on each record in the base table and therefore also in the MV.
So now the downstream application has to update some of its own database tables based on what is in the MV that I provide - just so I understand, how would that work? e.g.:
1. MV gets refreshed from my base table every minute
2. Some records are newly inserted and some simply updated
and this is the part I don't understand (I might be missing something obvious)....
3. Downstream application interrogates the MV (say every 1-2 minutes) and updates its own tables based on the data there... but let's say the MV contains 10,000 records, and in the last minute 100 of those records have been updated and another 100 inserted - how will it know the 200 records it needs to pull?
Thanks again 
Looks like the following would work perfectly for me in 10g
select * from my_mv where ora_rowscn >= timestamp_to_scn(sysdate - 2/1440);
Is there a 9i equivalent?
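One caveat with the ORA_ROWSCN approach above: by default Oracle tracks the SCN per block, not per row, so unchanged rows that share a block with a changed row will also be returned. Row-level tracking must be requested when the table is created (hypothetical names):

```sql
-- ROWDEPENDENCIES must be specified at CREATE TABLE time;
-- it cannot be added later with ALTER TABLE.
CREATE TABLE t_tracked (
  id  NUMBER PRIMARY KEY,
  val VARCHAR2(100)
) ROWDEPENDENCIES;

-- rows changed in (roughly) the last two minutes
SELECT *
FROM   t_tracked
WHERE  ora_rowscn >= timestamp_to_scn(SYSDATE - 2/1440);
```

Also note that TIMESTAMP_TO_SCN is only approximate (a few seconds of granularity), so a small overlap window is safer than an exact cutoff.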

Help with Trigger on Materialized View Base table

I have database 1, and database 2.
In database 1, I have a table, and it also has a log on it, for the mview.
In database 2, I have the MView's base table, and the MView that sits on it and refreshes its data using the log from database 1.
Now, I need a trigger on the base table in database 2, that will select a few columns from the newly refreshed data, and insert them into a 3rd table.
So every time the mview is refreshed, I need to save a few columns from each record that was refreshed, and put them in another table.
Pretty much everywhere I have looked for help with this, all I get is:
"If you create a trigger on a base table of a materialized view, then you must ensure that the trigger does not fire during a refresh of the materialized view. During refresh, the DBMS_MVIEW procedure I_AM_A_REFRESH returns TRUE"
So, if the trigger can not be active during the actual refresh, is it possible for me to still capture some columns from the newly refreshed records, immediately after the refresh completes?
Although the create trigger documentation does use the word "must", you are quite free to let the trigger fire during a refresh of the materialized view. Generally, you don't want the trigger to fire during a materialized view refresh-- in a multi-master replication environment, for example, you can easily end up with a situation where a change made in A gets replicated to B, the change fired by the trigger in B gets replicated back to A, the change made by the trigger in A gets replicated back to B, in an infinite loop that quickly brings both systems to their knees. But there is nothing that prevents you from letting it run assuming you are confident that you're not going to create any problems for yourself.
You do need to be aware, however, that there is no guarantee that your trigger will fire only for new rows. For example, it is very likely that at some point in the future, you'll need to do a full refresh of the materialized view, in which case all the rows will be deleted and re-inserted (thus causing your trigger to fire for every row). And if your materialized view does anything other than a simple "SELECT * FROM table@db_link", it is possible that an incremental refresh will update a row that actually didn't change, because Oracle couldn't guarantee that the row would be unchanged.
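A minimal sketch of a row trigger on the MV's container table that captures a few columns into a side table (all names hypothetical). It is deliberately not guarded with DBMS_MVIEW.I_AM_A_REFRESH, so it also fires for rows touched by the refresh, which is the behaviour asked for here:

```sql
CREATE OR REPLACE TRIGGER trg_capture_mv
AFTER INSERT OR UPDATE ON my_mview    -- the materialized view's container table
FOR EACH ROW
BEGIN
  -- save the columns of interest plus a capture timestamp
  INSERT INTO mv_capture (id, col_a, col_b, captured_at)
  VALUES (:NEW.id, :NEW.col_a, :NEW.col_b, SYSDATE);
END;
/
```

Per the caveats above, expect this to fire for every row on a complete refresh, not just for genuinely changed rows.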

How to refresh a Materialised view(MV)?

Hi Forum,
I am planning to develop MVs in my database where I can point my user applications to run some reports. These MVs are located on a completely different server. To illustrate in more detail: A and B are my production database servers, and C is the one where I want to place the MVs, populated by data from A and B.
Until now we have been accessing the data from the production databases to run these reports. So, instead of a Data Warehouse, I want to suggest that my company write some materialised views.
The key criterion is that data which is updated in the production DB needs to be updated in the materialised view instantly. Is something like that possible with materialised views? I have gone through some articles on how to refresh the data in an MV, which say that if we use ON COMMIT it will refresh the data in the MV.
But my question is: every time the MV refreshes, does it create the MV as a new one, or will only the updated data be populated in the MV?
Thanks for your time.
An MV has different refresh types, from ON COMMIT and fast/incremental to full refreshes.
There are, however, various restrictions that determine what type of refresh is possible. If delta changes are to be replicated to the MV, then Oracle needs to know what the changes to the table are - thus an MV log is needed on the source table. Etc.
I suggest that you consult the Oracle® Database Advanced Replication guide.
> my question is does every time the MV refreshes does it create the MV as a
new one or only the updated data will be populated in the MV?
No. An MV is only created once. The refresh process uses transaction processing to update the MV - it does not re-create it; it deletes/updates/inserts rows in the MV. 
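The fast-refresh mechanics described above can be sketched as follows (hypothetical names). Note that ON COMMIT refresh requires the master table and the MV to be in the same database, so it is not available when the MV sits on a separate server:

```sql
-- On the master table: a log so Oracle can track the deltas
CREATE MATERIALIZED VIEW LOG ON orders WITH PRIMARY KEY;

-- A fast-refreshable MV: only changed rows are applied on refresh
CREATE MATERIALIZED VIEW mv_orders
  REFRESH FAST ON DEMAND
AS SELECT * FROM orders;

-- Apply the deltas recorded in the log since the last refresh
EXEC DBMS_MVIEW.REFRESH('MV_ORDERS', method => 'F');
```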
Hi Billy,
One thing to ask: if we refresh a mat view in a REF group using the
dbms_refresh.refresh package, Oracle does a delete and then inserts new records.
On the other hand, if we refresh a stand-alone snapshot using the
dbms_snapshot.refresh package, Oracle truncates and then inserts new records.
Likewise, is dbms_snapshot.refresh much quicker than dbms_refresh.refresh?
Is this the case?
What is the query_rewrite option used for? 
Ah... yes. Correction. My statement that MVs are updated/deleted/inserted is only half correct - in other words, it is half wrong.
When using refresh groups this statement is correct. Not using refresh groups (individual refreshes) performs a truncate.
I use refresh groups myself... but even so, I'm not a great fan of MVs. I personally think this feature has yet to mature into a truly robust one. In some cases, I create my own "refreshed views" for aggregation of data (I have written a generic PL/SQL package that caters for this). It is significantly faster than MVs and a lot more scalable - but then it is also very specific to dealing with partitioned tables only.
But as you are looking for at an on commit approach, I suggest looking at the Replication guide. 
Can you post the package, ?
what logic you use in the package ?
most of the time I use rowid to find out whether mat views have been refreshed or not, since every time you have new data there is a rowid change,
your link is useful 
> Can you post the package, ?
Unfortunately not - my contract states that code I write belongs to the company that pays for the code.
> what logic you use in the package
Well, it is very specific to a system that I've written that needs to perform aggregation of raw data into hourly data, hourly data into daily data, daily into monthly, etc. As the data sets are based on a sliding window and aged out, an MV is not really possible on those data sets, because data retention at the aggregated levels has bigger sliding windows than at the finer-grained levels.
Due to the data volumes involved (about 190 million raw rows/day), it is also not possible to do a delta update of the aggregated data. So I had to come up with something that has very few moving parts, Oracle-wise.
A CTAS (Create Table As Select) creates the "MV" full update. This table is then partition-exchanged with the partitioned table (serving as the "MV"), where each partition covers a specific date range (e.g. an hourly summary, daily summary, etc.). The "staging" table, dynamically created for the duration of this operation, has to physically match the partitioned table column for column and index for index in order to perform a fast (and correct) partition exchange.
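A rough sketch of the CTAS + partition-exchange pattern described above, with hypothetical object names:

```sql
-- Build the refreshed aggregate in a staging table (it must match the
-- partitioned table column-for-column and index-for-index)
CREATE TABLE stg_hourly AS
SELECT TRUNC(sample_time, 'HH24') AS hr,
       SUM(val)                   AS total_val
FROM   raw_data
WHERE  sample_time >= TRUNC(SYSDATE, 'HH24')
GROUP  BY TRUNC(sample_time, 'HH24');

-- Swap it into the "MV" partitioned table: a near-instant dictionary
-- operation, no rows are physically moved
ALTER TABLE hourly_summary
  EXCHANGE PARTITION p_current WITH TABLE stg_hourly
  INCLUDING INDEXES WITHOUT VALIDATION;
```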
> most of the time is use rowid to find if mat views are refreshed or not,
since every time you have new data there is rowid change
Which is why MV logs are useful - they record the actual changes, allowing an MV to be updated with only the delta differences. But there are overheads in terms of the triggers and additional tables that are created for MV logs.
One needs to have an accurate problem description in order to decide just what type of MV feature is the better one to use. 
thanks billy, for the link and for the explanation,
the link is very useful..

Please suggest approach to get data over a DB Link

Hello Gurus,
We have a requirement wherein we have to fetch data from Database1 (say) into Database2 (say) on a periodic basis, i.e.
Database1 (Source) -------------------------> Database2( Destination)
But the problem we have is that the Database1 tables (we are fetching from) don't have insert_date/update_date (type) columns. Hence we don't have any way to figure out which rows were inserted on which date. e.g. on day 1 DB1 has 10 rows, so our program should fetch those 10 rows on day 1; on day 2 DB1 receives 15 more rows, so our program should fetch only those 15 rows on day 2, and so on...
For satisfying this requirement we will have to compare both DB Tables each time to find out the difference, which we feel is a performance overhead as the comparison will take place over the DBLink, also gradually Data will increase which will hamper the performance.
Could you please suggest the best approach to satisfy this requirement?
Database1(source) is Oracle9i and Database2(Destination) is Oracle 10g.
Thanks in advance! 
Create a MATERIALIZED VIEW LOG on the base table in Database 1
Create a MATERIALIZED VIEW in Database 2, with defined periodic FAST refresh.
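A sketch of this suggestion, with hypothetical object and database-link names:

```sql
-- On Database1 (source): record changes to the base table
CREATE MATERIALIZED VIEW LOG ON src_table WITH PRIMARY KEY;

-- On Database2 (destination): pull only the deltas on a schedule
CREATE MATERIALIZED VIEW mv_src_table
  REFRESH FAST
  START WITH SYSDATE NEXT SYSDATE + 1   -- daily, for example
AS SELECT * FROM src_table@db1_link;
```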
Hemant's suggestion is good, but on a practical level you have identified a business requirement that you need to know when the data was inserted or updated.
Thus you need a column to capture the insert/update timestamp. Usually these are populated by before update or insert triggers. 
Yes, that's true!
But the other DB from which we are fetching data is NOT owned by us; it's a third-party DB. Hence there is no chance of modifying its tables to have insert/update date columns.
Hence we have to decide on an approach that does not require modifying Database1 (in this case) 
You will need to be able to "modify" database 1 in that you have to create a Materialized View Log on the source table.
If you do create the MV Log, you can do a FAST Refresh, and you do not need to add identifying "date" columns because the FAST Refresh can use the Primary Key (or build it USING ROWID if there is no PK) to identify new/updated rows and pull only those affected rows.
If you can't create the MV Log, you'd have to do a COMPLETE Refresh each time. That can take time to execute, depending on the size of the table. 
Can you configure Streams on the source database? That doesn't require modifying the tables or creating any objects in the source application's schemas, but it does require some database configuration on both ends. That would allow you to subscribe to changes to the data in the remote database, which gives you at least an approximate timestamp for the change (you should be able to convert the SCN of the change to a timestamp +/- a few seconds using the SCN_TO_TIMESTAMP function). Of course, Streams between a 9i source and a 10g destination is a fair amount of work to set up.
Barring that, can you get the archived logs from the source database and run LogMiner on a separate database (possibly even the target database)? That's less elegant than Streams, but it would allow you to extract the changes. Of course, it would require shipping all the redo generated by the source database, which could well be more data than a complete refresh of a materialized view would bring over.
1)Can we create the MV Log in destination DB?
2)Can we pull only the updated rows in the MV?
i.e. if on day 1 there are 100 rows in the DB1 (source) tables, then on day 1 the DB2 (destination) MV will have 100 rows.
On day 2, if DB1 receives 50 more rows, is it possible that on day 2 the DB2 MV will have ONLY those newly added 50 rows?
Please guide. 
The materialized view log must exist on the source database.
If you create a materialized view log on the source database, then the materialized view on the destination database could be refreshed incrementally. The materialized view will have all the rows that exist in the table on the source database, but only the changed rows would be copied over.
Thanks Justin!
Few more!
> The materialized view will have all the rows that exist in the table on the source database, but only the changed rows would be copied over.
I am planning to have a 'create_date' column in the DB2 MV which will default to the system date. And as ONLY CHANGED rows would be copied over, does that mean that on refresh ONLY those changed rows will have the 'create_date' as sysdate, and the rest will keep their earlier dates?
I am also going through docs, but your help here will direct me in my design.
Thanks in advance! 
Assuming that the materialized view is fast refreshable (i.e. there is a materialized view log on the base table in DB1 and the materialized view query meets the criteria for a fast-refreshable materialized view), and assuming that you are assigning the CREATE_DATE in a trigger on the materialized view, and assuming that the materialized view refresh is actually incremental, then yes, only changed rows would have a CREATE_DATE of SYSDATE. Of course, that date would be the date & time of the refresh, not the actual date & time that the row was created on the source system.
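A sketch of the trigger-based stamping described above, assuming the MV actually has a CREATE_DATE column and an incremental refresh (all names hypothetical):

```sql
-- Row trigger on the MV's container table: every row the refresh
-- touches gets re-stamped; untouched rows keep their earlier dates
CREATE OR REPLACE TRIGGER trg_stamp_mv
BEFORE INSERT OR UPDATE ON mv_src_table
FOR EACH ROW
BEGIN
  :NEW.create_date := SYSDATE;
END;
/
```

As noted above, this records the refresh time, not the time the row was created on the source system; and a complete refresh would re-stamp every row.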

Materialized View refresh and Table Truncate

Hi All
I have a table T1 which is truncated and reloaded every 10 minutes with 1000s of records
I have a view V1 based on this table
There is an application which reads from the view and displays data continuously to the users.
When the truncate takes place and the data is reloaded, the application shows blank data ( for about 5 minutes)
How do I avoid this situation
Will creating MATERIALIZED VIEWS solve this problem ?
I will refresh the MVs # every 10 minutes
Will it give the same problem as my normal VIEW.
Is there any other solution in this case
> the application shows blank data ( for about 5 minutes)
Why are you loading rows into the table every 10 minutes (into the empty table, that is)?
Are you saying that your truncate + load cycle takes about five minutes?
What does this process looks like?
What is the business requirement to do this load every 10 minutes?
I assume that your truncate + load cycle takes 5 minutes to finish. In this case, if a user queries during that window he sees a blank set.
A materialized view will only be useful if it is a fast-refresh type and not a complete/full refresh. You can try this, as a materialized view holds actual data inside a table (unlike normal views).
Hope this is useful! 
The Truncate and load takes about 5 minutes
The users want to see real time data
The files from transactional database are loaded into this table every 10 minutes.
Have you considered some sort of asynchronous notification system for the front-end? If users want real-time data, you could use an appropriate messaging system (i.e. JMS for a J2EE application) to get data as it comes into the database and the application can manage aging out old data.
Distributed Database Consulting, Inc.
I think what you need is two tables not one table and a view. Create two tables 'a1' and 'a2' and a synonym 'a'.
All the user queries should refer to the synonym 'a'.
Once the initial load is complete (say to a1), create the synonym pointing to a1. You can start loading data into a2 five minutes later; after another five minutes the load into a2 will be complete. Drop the synonym when the load into a2 is complete and re-create it pointing to a2. Repeat the process after five minutes - this time loading data into a1 and switching the synonym to a1. 
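The table-switch idea above, sketched with hypothetical names:

```sql
-- One-time setup: two identical tables and a synonym the users query
CREATE TABLE a1 AS SELECT * FROM source_data WHERE 1 = 0;
CREATE TABLE a2 AS SELECT * FROM source_data WHERE 1 = 0;
CREATE SYNONYM a FOR a1;

-- Each cycle: load the table the synonym is NOT pointing at...
INSERT INTO a2 SELECT * FROM source_data;
COMMIT;

-- ...then swap the synonym; queries against "a" now see the fresh data
DROP SYNONYM a;
CREATE SYNONYM a FOR a2;
```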
We actually used this approach at KPN in Holland, and it all works fine except for one small side-effect, which was not a major issue as we only "switched" tables daily: stored objects that reference the synonym are invalidated and a recompilation is necessary. This happens automatically, but it is worth noting.
If the count of records is in the thousands and not the millions, then use
delete from my_table;
The users would see the previous load until the commit is done. Then they would see the new data on the next refresh, but at no time would they see a blank table.
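Sketched below (hypothetical staging source). Because DELETE, unlike TRUNCATE, is transactional, readers keep seeing the old rows until the COMMIT:

```sql
DELETE FROM my_table;
INSERT INTO my_table SELECT * FROM staging_table;
COMMIT;   -- the old data stays visible to readers until this point
```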
Vincent, I am using your idea currently.
Bill, I think your idea would work; the only thing is we use Informatica to load the data, so I cannot use a stored procedure.
Ashwin N.