Friday, February 02, 2007
How to Solve Problems
I received the following question from (presumably) a student.
"I had a doubt can you kindly clarify me, and if you can send me details it will really help me to understand better. Thanks for your time.
Delete operation will be faster in
a) All page locking
b) Data page locking
c) Data row locking"
How would I find an answer to this question? First of all, I would consult the Sybase documentation. It truly is excellent, but I've written before about other places to look if you're still stuck. I happen to have studied this in the past, and I know that explanations of the different locking schemes are easy to find. Hint: allpages locking locks both data and index pages, while data-only locking (datapages or datarows) locks only the data.
In this case, you'll find your answer pretty easily. But in other cases, you'll probably find yourself answering "It depends." That is a very intelligent answer (assuming, of course, that you understand what it depends on).
But you're not done yet. Remember that documentation and the experts can be wrong. Also, things change, and the information you have been given may be out-of-date. You need to test your conclusions after you have reached them. You'll see in my blog that's how I learn: I create test cases.
In this case, it's easy. Create several identical tables that differ only in their locking schemes. Obviously you'll want lots of data in each of them. Then you can turn on timing statistics (here's how) and see how long the deletes take. You're less likely to have made a mistake, and you'll learn a lot more by doing than by reading or listening. After you have done so, I'd be glad to share your findings here.
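Here is a minimal sketch of such a test, with hypothetical table and column names. A real test would use far more rows, but it shows the shape of the experiment: one copy of the table per locking scheme, identical data, and timed deletes.

-- Hypothetical test tables, identical except for the locking scheme
create table del_allpages  (id int, payload varchar(100)) lock allpages
create table del_datapages (id int, payload varchar(100)) lock datapages
create table del_datarows  (id int, payload varchar(100)) lock datarows
go

-- Load the same data into all three (use many more rows in a real test)
declare @i int
select @i = 1
while @i <= 10000
begin
    insert del_allpages  values (@i, replicate('x', 100))
    insert del_datapages values (@i, replicate('x', 100))
    insert del_datarows  values (@i, replicate('x', 100))
    select @i = @i + 1
end
go

-- Time the deletes
set statistics time on
go
delete del_allpages
delete del_datapages
delete del_datarows
go
set statistics time off
go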
Now you know the secret!
Tuesday, May 30, 2006
Accepting Contributions
Please send any contributions to me, and I will post them here on your behalf. Check my profile for contact information.
There are limited options for Sybase professionals looking for information, and I'd be pleased to help in any way I can. Sadly, I have changed jobs recently, and since I no longer work with Sybase as much, I will be contributing very little myself.
In the meantime, visit the links on this page, enjoy some of the excellent articles in the archives, and check back for new contributions from fellow Sybase professionals.
Friday, January 27, 2006
Join Optimization
Tuning queries is a required skill for a Sybase specialist. One of the many ways to tune a query is to make sure the best possible query plan is being used. One clear advantage one plan may have over another is the join order it uses when querying several tables. Which join order is best depends on factors such as the where clauses, the number of rows in the tables, how they are stored, and the available indexes.
The Sybase query optimizer is in charge of evaluating the available join orders and choosing the best one. You probably know that we can see which plan it chose by turning on showplan:
set showplan on
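For example, assuming a hypothetical two-table query, you can pair showplan with noexec to see the chosen plan without actually executing the statement:

set showplan on
set noexec on
go
select o.id, c.name
from orders o, customers c   -- hypothetical tables
where o.cust_id = c.id
go
set noexec off
set showplan off
go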
But did you also know that we can use certain dbcc trace flags to see the various plans it evaluated, and the estimated cost of each one? Here's how:
dbcc traceon(3604, 302, 310, 317)
Incidentally, trace flag 3604 sends the output to your client session instead of the error log (trace flag 3605 sends it to the error log), which keeps the error log from filling up with diagnostic output.
You may already be familiar with 302, but 310 and 317 are the trace flags that show which join orders were considered and rejected, and what their estimated costs were. Together they'll show you ALL the plans, so be prepared for a lot of output; otherwise, use 310 by itself.
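Putting that together, and following the advice above to skip 317 unless you need every rejected plan, a session might look like this (the query and table names are hypothetical):

dbcc traceon(3604)       -- send dbcc output to this session, not the error log
dbcc traceon(302, 310)   -- 302: index selection costing, 310: join order costing
go
select o.id, c.name
from orders o, customers c
where o.cust_id = c.id
go
dbcc traceoff(302, 310, 3604)
go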
There is also a way to force the optimizer to use a particular join order: namely, the order in which the tables are listed in the query's "from" clause.
set forceplan [on|off]
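As a quick sketch with the same hypothetical tables: with forceplan on, the optimizer joins the tables in exactly the order they appear in the from clause.

set forceplan on
go
-- Join order is forced to match the from clause:
-- customers becomes the outer table, orders the inner table.
select o.id, c.name
from customers c, orders o
where o.cust_id = c.id
go
set forceplan off
go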
The advantage is that it saves you the time the optimizer would have spent choosing a join order. The bad news is, that time is negligible anyway.
The disadvantage is that there is usually a reason the optimizer chooses its plan, and it can vary from execution to execution depending on the data and the variables of the query. Instead of forcing a plan you think is best, you may want to consider why the optimizer is choosing the "wrong" plan. That's where the aforementioned tracing can help.
As a quick fix, though, make sure your statistics and stored procedures are up to date! The optimizer depends on their accuracy for the best results.
update statistics table_name [index_name]
exec proc_name with recompile
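For example, with a hypothetical table, index, and procedure:

update statistics orders orders_cust_idx   -- hypothetical table and index names
go
exec get_open_orders with recompile        -- hypothetical procedure name
go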
There is much more to know about join optimization than I can do justice to in this short article. If you find yourself examining join optimization as part of your query tuning, you'll definitely want to review Sybase's Performance and Tuning Guide, Chapter 38. It talks about dbcc 302, 310, 317 as well as the principles of join optimization.
http://sybooks.sybase.com/onlinebooks/group-as/asg1250e/ptallbk
I also found a really good article from Sybase entitled "Analyzing and Resolving Optimizer Problems/Symptoms":
http://www.sybase.com/detail?id=2602
Wednesday, December 14, 2005
Sybase and Perl
Question: I've got a Perl script that collects information every night that I'd like to store in the database. What is the simplest way to do that?
Answer: Well, that depends on which is easiest for you; there are many ways.
The most pedestrian way using Perl:
1. Output your SQL to a file.
2. Use "system" (or backticks) to run a command on the shell, which:
3. Runs "isql" to connect to the database, with the file you created as its input.
If you wanted to be more interactive, you could redirect the output to a file, read it in, and parse the results.
But there is no reason to do it that way.
Instead, you can use the Perl DBI module. In its simplest form, here is what you could do:
use DBI;

# connect() is a DBI class method; it returns the database handle.
my $dbh = DBI->connect('dbi:Sybase:server=servername;database=dbname', 'user', 'password');
$dbh->do($sql);      # run a statement that returns no rows
$dbh->disconnect();
To be more interactive, you'd use $dbh->prepare and $sth->execute, then loop over $sth->fetchrow_array (or simply call $dbh->selectall_arrayref). Here is more on using DBI with Sybase from Perl: http://www.perl.com/pub/a/1999/10/DBI.html
There is one more option I'd like to highlight, SybPerl. Specifically, the CTlib module. In its simplest form, our example now looks like this:
use Sybase::CTlib;

# new() logs in and returns a connection handle.
my $dbh = new Sybase::CTlib 'user', 'password', 'SERVER';
$dbh->ct_execute($sql);
Here, not only could you use $dbh->ct_results along with $dbh->ct_fetch to be more interactive, but, better yet:
@rows = $dbh->ct_sql($sql);
Michael Peppler's page has more information on SybPerl: http://www.peppler.org/~mpeppler/
Thursday, October 27, 2005
MS SQL Server vs Sybase ASE
What is the difference between MS SQL Server and Sybase ASE?
Those that have seen both are probably struck by how similar they are. Why are they so similar?
Well, MS SQL Server gets its roots from Sybase. In fact, the first version, MS SQL 4.2, was Sybase SQL Server 4.0, and was designed for OS/2.
The NT port came out a few years later. The first version of MS SQL Server developed without Sybase was MS SQL 6.0. Of course, it was still derived from 4.2, which came from Sybase SQL Server 4.0, so the differences weren't great.
Over time the two have grown a little bit apart. Sybase SQL Server eventually changed its name to Sybase ASE (Adaptive Server Enterprise). But they both still use very similar variations of SQL called "Transact-SQL" or T-SQL. The differences between the two are largely cosmetic: for example, Sybase ASE allows for longer names and MS SQL has indexed views.
MS SQL Server is sometimes referred to as the "Windows version" of Sybase ASE, but Sybase ASE is also available on Windows. MS SQL Server's primary advantage is that it is generally cheaper and slightly simpler, whereas Sybase ASE is available on more platforms and is generally more configurable.