Here is this year’s silly Christmas song – by Mary Elizabeth McNeely, 2010. Sing it to the tune of O Christmas Tree:
O Database, O Database,
How lovely are your b-trees!
Your indexes will always grow
Through data loads of many rows.
O Database, O Database,
How lovely are your b-trees!
O Database, O Database,
Your segments are most loved!
How often they give us delight
When we tune the query right.
O Database, O Database,
Your segments are most loved!
O Database, O Database,
How lovely are your trace files!
Process tracing will ever be
The way to joy and peace for me.
O Database, O Database,
How lovely are your trace files!
O Database, O Database,
The boss-man always tells me
That my RMAN backups better be
The path to full recovery.
O Database, O Database,
The boss-man always tells me.
Example upgrade #3 was a migration-style upgrade from 10.2 to 11.1.0.7. A migration is a different type of upgrade from an export/import upgrade: it changes a lower version database into a higher version database. If you decide you want to go back to the previous version, you'll have to take certain actions, depending on what's happened since you migrated. They might be relatively simple, or they could involve restoring the whole database from backup. This upgrade type is also referred to as an "in place" upgrade, because it operates on your actual current database.
The main advantages of a migration upgrade are that you don't have to make room for two databases during the upgrade process, and that the process can run relatively quickly. The main drawbacks are that retreating to the previous state of your database could be tricky and will take some time, and that, in my opinion, it takes more painstaking work to prepare for a migration upgrade than for an export/import upgrade.
Overall, example upgrade #3 went smoothly.
Only a couple of weird things happened with this upgrade. Some materialized views were invalid because of SQL that was valid in 10g but not in 11g. We didn't see the actual SQL, but we heard from the application administrator that 11g was complaining about a selected column not being uniquely identified.
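We never saw the offending statements, but the symptom sounds like ORA-00918 ("column ambiguously defined"). Here is a contrived illustration of that error class; the table and column names are invented, and whether a particular statement slips through depends on the exact join syntax and version:

-- Contrived example: STATUS exists in both ORDERS and CUSTOMERS,
-- so the unqualified reference is ambiguous and the parse fails
-- with ORA-00918. Qualifying the column (o.status or c.status)
-- resolves it.
CREATE MATERIALIZED VIEW mv_order_status AS
SELECT o.order_id,
       status          -- ambiguous: which table's STATUS?
FROM   orders o
       JOIN customers c ON c.customer_id = o.customer_id;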
Also, the STATSPACK package was invalid and wouldn't recompile, so I ended up dropping and recreating it; Oracle tech support didn't seem to have a better idea. If the client had cared about their STATSPACK snapshot history, I might have been in some trouble, since I didn't research whether the snapshot format is the same in 10g and 11g.
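For anyone facing the same situation: dropping and recreating STATSPACK is typically done with the standard scripts that ship with the database, roughly like the following (run as SYSDBA in SQL*Plus; spcreate.sql prompts for the PERFSTAT password and for tablespaces, which vary by site):

-- Drop the existing (invalid) STATSPACK installation.
-- WARNING: this removes the PERFSTAT schema and its snapshot history.
@?/rdbms/admin/spdrop.sql

-- Recreate STATSPACK; the script prompts for the PERFSTAT password
-- and for the default and temporary tablespaces to use.
@?/rdbms/admin/spcreate.sql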
Application performance was about the same as before the upgrade.
The second Oracle upgrade I'll tell you about was another export/import upgrade, this time from 10.2 to 11.1.0.7.
The installation, empty 11g database build, and 10g export/11g import went fine on the test tier. The application testing of the upgraded 11g test database performed by the client also went fine – the client was satisfied with the performance. The run times of the key jobs that were tested bore a strong resemblance to those of the current 10g test database. Too strong, it turned out.
The installation, empty 11g database build, and 10g export/11g import on the production tier also went fine. The client ran a few quick reports to be sure they could connect to the new database, and all seemed well.
Then things went wrong. A job that had taken one or two minutes on 10g was still running nine hours later on the upgraded database. A couple of other jobs were also running much longer than before, while others were running faster than on 10g. How could the results on the production tier be so different from the test tier? We'd even imported production data onto the test tier to make the test environment as similar to production as we could. Importing data isn't a perfect copy (a physical clone is better), but at least we had the same number of rows, the same indexes, and so on.
At that point, the person who did the application testing realized that, when he did the 11g testing, he had accidentally been pointed at the 10g test database. We were now running production on an untested version. The client decided to wait and see whether we could fix the problems and remain on 11g instead of retreating, although retreating would have been fairly painless: we hadn't harmed the 10g database in the upgrade process, and the data in the database only changed weekly. (This easy retreat is one of the better features of an export/import upgrade.)
We tried standing on the nine-hour query's throat: first we hinted it to force the same execution plan it had used on 10g, and then we tried a stored plan. Both yielded new, but still long-running, execution plans. This was a complex query, which made things harder for us, since we were on a tight timeline. We tried setting optimizer_features_enable to 10.2.0.2 (the source version), 10.2.0.3, and 10.2.0.4 (just in case those ran better) at the database level. All three settings were great for the nine-hour query but disastrous for some other queries. We tried restoring segment statistics from the 10g database to see if that would help. It did help some queries, but it harmed others.
Time was short, so the solution we settled on was to hint the nine-hour query and a couple of other queries to use optimizer_features_enable = 10.2.0.4 (an instruction to those queries to act as if they're running on a 10.2.0.4 database); to leave optimizer_features_enable at the default 11g value database-wide (an instruction to all other queries to act as if they're running on an 11.1.0.7 database unless otherwise instructed); and to leave the 11g-level segment statistics in place. This returned the queries to the 10g level of performance or better.
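For readers who haven't worked with this parameter before, here is a minimal sketch of the two levels of control described above; the table and column names are made up for illustration:

-- Per-query: ask the optimizer to behave as it did in 10.2.0.4 for
-- just this statement (hypothetical query).
SELECT /*+ OPTIMIZER_FEATURES_ENABLE('10.2.0.4') */
       order_id, status
FROM   orders
WHERE  order_date >= SYSDATE - 7;

-- Database-wide: leave the parameter at the 11.1.0.7 default so all
-- other queries use the new optimizer features (assumes an spfile).
ALTER SYSTEM SET optimizer_features_enable = '11.1.0.7' SCOPE = BOTH;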
The client was satisfied with this, as their goal wasn’t to improve performance; it was to get onto a version of Oracle that will be supported longer than 10gR2. (It would have been interesting to dig further into this issue, but the client was ready to move on to the next project on the list.)
Since we’re encouraging clients to move off Oracle 10g, we’d like to write up three of the 11g upgrades that we’ve done, to raise the comfort level of those who are still “window shopping” Oracle 11g.
Example upgrade #1 involved an upgrade from Oracle 10.2.0.4 to 11.1.0.7. This upgrade was an export/import upgrade, meaning that we exported the data from the 10.2.0.4 database, created an empty 11.1.0.7 database, and imported the 10.2.0.4 data into the 11.1.0.7 database. There were more steps involved than that, but that’s the big-picture view of how the upgrade was done.
The export/import style of database upgrade is a good one because it leaves the source database (the current database) unchanged. This can mean a relatively uncomplicated retreat, should something go wrong with the upgrade or should the client determine that their application is not running properly on the upgraded database. The export/import method also allows a jump to a different machine and even a different operating system at the same time the database upgrade is done. This is the most popular option for our clients, who often implement a new server at the same time as they upgrade their databases. One drawback to the export/import method is that, unless the amount of application data is relatively small, it usually involves a longer outage than other upgrade options.
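The exact mechanics vary by site, but as one concrete flavor of the approach, a Data Pump based export/import looks roughly like this; the directory path, directory name, and schema name below are hypothetical:

-- On the source 10.2.0.4 database: create a directory object that
-- Data Pump can write the dump file to (path is hypothetical).
CREATE DIRECTORY upgrade_dp AS '/u01/app/oracle/dp_upgrade';
GRANT READ, WRITE ON DIRECTORY upgrade_dp TO system;

-- Then, from the OS prompt, something along these lines:
--   expdp system schemas=APP directory=UPGRADE_DP dumpfile=app.dmp logfile=exp_app.log
--
-- After creating the same directory object on the empty 11.1.0.7
-- database and copying the dump file over:
--   impdp system schemas=APP directory=UPGRADE_DP dumpfile=app.dmp logfile=imp_app.log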
So, back to example upgrade #1. This client implemented a new VM for the 11g database, so we created the new empty 11g database there, then imported the data from the 10.2.0.4 database. The client performed application testing, made some application changes on the production database, and then we re-imported from production 10.2.0.4 to the new 11g database. After a couple of cycles of this, the client was ready for the production cutover.
The application performs about the same on 11g as it did on 10g.
The weird things that happened with this database upgrade were unique to this client, because they’re still using an old custom application that initially didn’t respond well to the newer Oracle database. The client discovered the issues while testing prior to the production cutover and was able to work around them.
It's tough out there in the IT world. Some of our buddies are unemployed, and that's hard to see happening to people we care about. And if jobs are scarce, moving to another job is not only riskier than before, but also less likely to be an option.
Aside from jobs being hard to come by, there are also fewer people doing the same amount of work as before. This might mean taking a vacation is simply not possible right now. Are you in this situation?
Here are some tips for avoiding burnout, even if you can’t take time off for a vacation:
Can you take a weekend getaway? There must be some charming lake, bed and breakfast, or historical site within a few hours’ drive of your home. Or perhaps a relative you’ve been meaning to see, but haven’t. Try to keep the laptop shut all weekend, unless you have a compelling reason otherwise. (Your being a workaholic is not a compelling reason. Go have some fun. If you’ve forgotten how, make a concerted effort to relearn the concept. Even if you’re proud of being a workaholic, look at it this way: you’ll be a better workaholic if you stop for a while!)
No weekend getaway? Can you get away overnight? Look for a special at a local hotel with a scenic view. Have a nice dinner, take a long drive or attend an event, get a fru-fru concoction from a coffee store, then slip off to sleep in your hotel room – don’t set the alarm clock! In the morning, order a sumptuous breakfast in the room (no corn flakes and milk for you today, my friend) and watch a useless but amusing TV show while you eat, or order up a movie you’ve been meaning to see. Open the curtains and enjoy the scenic view. If exercising is relaxing to you, try out the equipment in the hotel gym, or hike, run, or swim around in the scenic view. Take a long, hot shower or bath. Call the desk and ask if you can arrange a late check out – maybe you can eat lunch in the room too.
Lunchtime during the week can be your friend, too – can you arrange a two-hour lunch? Do whatever you’d normally feel guilty about doing. Peruse shoes at Nordstrom. Get a manicure or massage. Look at big screen TVs at the electronics store. Go to a yoga class. Have lunch with that friend who knows just what to say when you’re feeling down. Go to the Apple store and see what their latest gadget does.
Do you work out in the evenings? Is there a spa or resort nearby? (In Dallas, we have the Gaylord Texan – it’s a real wonderland.) Perhaps you could work out there sometimes – you won’t be on vacation, but you’ll at least be in a vacation setting – you can pretend for an hour! No spa? Can you take a class at your gym, or do something different than usual? Archery? Tai chi? Rock climbing? Escape from the usual.
Good luck, my burned out IT colleagues, with your quest for a less stressful existence. May things get better soon for our industry and our nation.
Is your only database administrator going on vacation this summer? Does the thought of that make the hair on the back of your neck stand up?
Proactive managers are already planning for coverage for their DBA’s vacation days. Larger organizations with multiple DBAs are fortunate in this regard – the DBAs can cover for each other.
With just a single DBA, some organizations cross-train a non-DBA to do simple tasks such as stopping and starting the database, and hope that no complex problems come up during the DBA's absence. If complex problems do come up, opening a ticket with Oracle technical support is an option, although working with tech support is most effective when it's done by someone familiar with the Oracle database landscape.
Other organizations arrange for a consultant to stand in for the DBA. If you have a consulting firm or a contractor with whom you are already comfortable, this could be a good option for you.
Whatever option you choose, do plan ahead for the issues that could arise during your DBA's absence and decide who will handle them. Please don't wait until you're already in trouble!
Oracle 10g is a good database product, and it has stood many of my clients in good stead for several years. Even though Oracle 11g has been on the market for a while now, I'm not seeing it widely adopted yet, which I find interesting.
I see three common factors feeding into the decision to stay on 10g. First, satisfaction with 10g: 10g databases are chugging along just fine in most shops. Second, risk aversion: why take the risk of upgrading to a newer version, especially if 10g is still running fine? Third, cost: the economic and opportunity cost of upgrading might seem too high. Why pay a consultant to do the upgrade, or pull a staff member away from other productive work? A less common reason I hear is that a software vendor has not yet certified its application with 11g.
But 10gR2 (Release 2) is about to enter the first phase of de-support, and 10gR1 is already well into the de-support process. It's time to start thinking about moving to 11g. It has some interesting new features, and running a currently supported version will also put you in good standing with Oracle Technical Support. Plus, eventually, most software vendors will not support their products if they're running on an out-of-date database version.
Start planning your upgrade soon – if we can help, feel free to contact us.
Mistake #9: Not keeping an eye on database space allocation
What happens when you run out of space?
If you run out of space between your car and the car in front of you, the two cars will crash into each other. If you come home with a bag of groceries and find that you have run out of space in your pantry, you'll have to dig out those old cans of soup from three years ago, consolidate those seven partially full boxes of Ziploc bags, and decide whether you'll ever actually use that box of seaweed soup mix. Running out of space is no fun. If you proactively manage the various space allotments in your life, you will have fewer unhappy surprises.
The same is true with your database. When key areas of your database or their associated external storage allocations fill, you can suffer from failed transactions, database outages or hangs, and lost logging information. Many space problems accumulate gradually and could have been prevented by early intervention.
Are you watching your database for tablespace-full conditions, nonextendable segments, and nonextendable datafiles? Are you properly managing your alert log, trace files, and archived redo logs? Are you watching your file systems/drives for near-full conditions? Have you configured your datafiles not to try to extend to a size larger than your file systems/drives can accommodate?
Do you know how to do these things? Contact us if we can help.
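Here's one freebie to get you started: a query that flags tablespaces running low on free space. It's a minimal sketch; the 10% threshold is arbitrary, and it ignores autoextend headroom and temporary tablespaces, but it illustrates the kind of check worth automating.

-- Permanent tablespaces with less than 10% free space.
SELECT df.tablespace_name,
       ROUND(df.total_mb)                               AS total_mb,
       ROUND(NVL(fs.free_mb, 0))                        AS free_mb,
       ROUND(100 * NVL(fs.free_mb, 0) / df.total_mb, 1) AS pct_free
FROM   (SELECT tablespace_name, SUM(bytes) / 1048576 AS total_mb
        FROM   dba_data_files
        GROUP  BY tablespace_name) df
       LEFT JOIN
       (SELECT tablespace_name, SUM(bytes) / 1048576 AS free_mb
        FROM   dba_free_space
        GROUP  BY tablespace_name) fs
       ON fs.tablespace_name = df.tablespace_name
WHERE  100 * NVL(fs.free_mb, 0) / df.total_mb < 10
ORDER  BY pct_free;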
Mistake #8: Not properly sizing the database memory footprint
Many things in information technology use caching to improve performance by doing less work. Oracle databases are no exception – they use in-memory caching to improve database performance. For example, they cache database records to lessen the number of times they are read from disk; they cache parsed SQL statements to avoid the expensive hard parsing process the next time the same statement is run.
Often I see servers that are dedicated to Oracle-only use, with considerable free RAM and a relatively undersized database memory footprint. Usually this happens in one of two ways: a too-low default was accepted at the time the database was created, or the database was created many years ago and the memory footprint size, while reasonable back then, is much too small for the larger RAM allocations we have these days.
Less commonly, I see the database memory footprint sized larger than the available free memory, causing the footprint, or parts of it, to be paged or swapped out to disk and back in again. This is completely counterproductive, as the whole idea of a cache is to do less work, not more!
A too-small or too-large memory footprint can dramatically reduce the throughput and performance of your database.
Is your database memory footprint doing the best it can do?
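One quick way to start answering that question is to look at how the instance memory is configured and carved up right now. A minimal sketch using standard views and parameters (memory_target exists only on 11g):

-- Configured memory targets (values in MB). Compare these figures
-- with the free RAM reported by the operating system.
SELECT name, ROUND(TO_NUMBER(value) / 1048576) AS value_mb
FROM   v$parameter
WHERE  name IN ('sga_target', 'sga_max_size',
                'pga_aggregate_target', 'memory_target');

-- How the SGA is actually allocated at the moment.
SELECT name, ROUND(bytes / 1048576) AS size_mb
FROM   v$sgainfo
ORDER  BY bytes DESC;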
Mistake #7: Not paying attention to the alert log
Oracle databases each have an alert log, a file to which the database writes messages about key events. Most entries in the alert log are just informational, such as summary information about database startups and shutdowns, online redo log switches, and so on.
Some entries, however, can show you that you’ve got some trouble brewing. Operations could be failing due to lack of space. The database might be reporting corrupt data blocks. Processes could be silently failing due to Oracle software bugs. Ignoring such entries could lead to poor application performance, application process failures, or database instability.
Take a look at your alert log on a regular basis, searching for signs of trouble. You can do this in an automated fashion using Oracle Enterprise Manager or custom scripts, or you can simply manually view the log.
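If you go the custom-script route, the first step is simply finding the file. A minimal sketch follows; the view and parameter below are standard, though the paths will obviously differ on your system.

-- 11g: the alert log lives in the ADR trace directory.
SELECT value AS alert_log_directory
FROM   v$diag_info
WHERE  name = 'Diag Trace';

-- 10g: it lives in the background dump destination.
SELECT value AS alert_log_directory
FROM   v$parameter
WHERE  name = 'background_dump_dest';

-- The file itself is named alert_<SID>.log; scan it regularly for
-- lines containing "ORA-", "Corrupt", "deadlock", and the like.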
Are you listening to what your database is telling you?