- 22 Aug, 2007 1 commit
-
-
malff/marcsql@weblab.(none) authored
This is a performance bug, related to the parsing of 'OR' and 'AND' boolean expressions. Let N be the number of expressions involved in an OR (respectively, AND).

When N=1: for example, "select 1" involves only one term; there is no OR operator. In 4.0 and 4.1, parsing expressions not involving OR had no overhead. In 5.0, parsing adds some overhead, with Select->expr_list. With this patch, the overhead introduced in 5.0 has been removed, so that performance for N=1 should be identical to 4.0, which is optimal (there is no code executed at all). The overhead in 5.0 was in fact affecting some operations significantly. For example, loading 1 million rows into a table with INSERTs, for a table that has 100 columns, leads to parsing 100 million expressions, which means that the overhead related to Select->expr_list is incurred 100 million times. Considering that N=1 is by far the most common case, this case should be optimal.

When N=2: for example, "select a OR b" involves 2 terms in the OR operator. In 4.0 and 4.1, parsing an expression with 2 terms created 1 Item_cond_or node, which is the expected result. In 5.0, parsing these expressions also produced 1 node, but with some extra overhead related to Select->expr_list: creating one list in Select->expr_list and another in Item_cond::list is inefficient. With this patch, the overhead introduced in 5.0 has been removed, so that performance for N=2 should be identical to 4.0. Note that the memory allocation uses the new (thd->mem_root) syntax directly. The cost of "is_cond_or" is estimated to be negligible: the real performance degradation comes from the unneeded memory allocations.

When N>=3: for example, "select a OR b OR c ...", which involves 3 or more terms. In 4.0 and 4.1, the parser had no significant cost overhead, but produced an Item tree which is difficult to evaluate/optimize at runtime. In 5.0, the parser produces a better Item tree, using the Item_cond constructor that accepts a list of children directly, but at an extra cost related to Select->expr_list. With this patch, the code takes the best of both implementations:
- there is no overhead with Select->expr_list;
- the generated Item tree is optimized and flattened.
This is achieved by adding child nodes into the Item tree directly, with Item_cond::add(), which avoids the need for temporary lists and memory allocations.

Note that this patch also provides an extra optimization that the previous 5.0 code did not: expressions are flattened in the Item tree based on what has already been parsed, not on the order in which rules are reduced. For example, "(a OR b) OR c" and "a OR (b OR c)" were both represented with 2 Item_cond_or nodes before this patch, and with only 1 node with this patch. The logic used is based on the mathematical properties of the OR operator (it is associative) and produces a simpler tree.
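As a rough illustration of the flattening idea, here is a minimal standalone C++ sketch. The Expr/OrExpr/reduce_or names are simplified stand-ins for the server's Item, Item_cond_or and the grammar reduction that calls Item_cond::add(); this is not the actual sql_yacc.yy code.

```cpp
#include <memory>
#include <string>
#include <vector>

// Simplified stand-ins for Item / Item_cond_or.
struct Expr {
    virtual ~Expr() = default;
    virtual bool is_or() const { return false; }
};

struct Leaf : Expr {
    std::string name;
    explicit Leaf(std::string n) : name(std::move(n)) {}
};

struct OrExpr : Expr {
    std::vector<std::unique_ptr<Expr>> children;
    bool is_or() const override { return true; }
};

// Reduction for "left OR right": reuse an existing OR node instead of
// allocating a new one, so that both "(a OR b) OR c" and "a OR (b OR c)"
// end up as a single flat node with three children.
std::unique_ptr<Expr> reduce_or(std::unique_ptr<Expr> left,
                                std::unique_ptr<Expr> right) {
    if (left->is_or()) {                 // (a OR b) OR c : append c
        static_cast<OrExpr *>(left.get())->children.push_back(std::move(right));
        return left;
    }
    if (right->is_or()) {                // a OR (b OR c) : prepend a
        auto *r = static_cast<OrExpr *>(right.get());
        r->children.insert(r->children.begin(), std::move(left));
        return right;
    }
    auto node = std::make_unique<OrExpr>();   // plain "a OR b": one new node
    node->children.push_back(std::move(left));
    node->children.push_back(std::move(right));
    return node;
}
```

Parsing "select 1" never reaches reduce_or at all, which mirrors the N=1 case above: no list, no allocation, no work.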
-
- 06 Aug, 2007 2 commits
-
-
kostja@bodhi.(none) authored
into bodhi.(none):/opt/local/work/mysql-5.0-runtime
-
kostja@bodhi.(none) authored
VIEW". mysql_list_fields() C API function would incorrectly set MYSQL_FIELD::decimals member for some view columns. The problem was in an incomplete implementation of Item_ident_for_show::make_field(), which is responsible for view columns metadata.
-
- 05 Aug, 2007 1 commit
-
-
dlenev@mockturtle.local authored
statement being KILLed". When statement which was trying to obtain write lock on then table and which was blocked by existing read lock was killed, concurrent statements that were trying to obtain read locks on the same table and that were blocked by the presence of this pending write lock were not woken up and had to wait until this first read lock goes away. This problem was caused by the fact that we forgot to wake up threads which pending requests could have been satisfied after removing lock request for the killed thread. The patch solves the problem by waking up those threads in such situation. Test for this bug will be added to 5.1 only as it has much better facilities for its implementation. Particularly, by using I_S.PROCESSLIST and wait_condition.inc script we can wait until thread will be blocked on certain table lock without relying on unconditional sleep (which usage increases time needed for test runs and might cause spurious test failures on slower platforms).
-
- 01 Aug, 2007 2 commits
-
-
kostja@bodhi.(none) authored
error 1159 ' ' (fixed by the patch for Bug 25679)
-
kostja@bodhi.(none) authored
-
- 31 Jul, 2007 1 commit
-
-
kostja@bodhi.(none) authored
into bodhi.(none):/opt/local/work/mysql-5.0-runtime
-
- 30 Jul, 2007 1 commit
-
-
kostja@bodhi.(none) authored
-
- 29 Jul, 2007 1 commit
-
-
thek@adventure.(none) authored
- Removed unused variable.
-
- 27 Jul, 2007 5 commits
-
-
thek@adventure.(none) authored
into adventure.(none):/home/thek/Development/cpp/mysql-5.0-runtime
-
thek@adventure.(none) authored
When a table was explicitly locked with LOCK TABLES, no associated tables from any related trigger on the subject table were locked. As a result, the user could experience unexpected locking behavior and statement failures similar to "failed: 1100: Table 'xx' was not locked with LOCK TABLES". This patch fixes the problem by making sure triggers are pre-loaded for any statement if the subject table was explicitly locked with LOCK TABLES.
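A rough sketch of the idea with hypothetical simplified types (Trigger/Table below are not the real server classes): when building the set of tables a statement needs, the tables referenced by triggers of a LOCK TABLES-locked table are pulled in as well.

```cpp
#include <string>
#include <vector>

// Hypothetical, simplified stand-ins for a table and its parsed triggers.
struct Trigger { std::vector<std::string> used_tables; };

struct Table {
    std::string name;
    bool locked_by_lock_tables = false;
    std::vector<Trigger> triggers;
};

// Add the tables used by the triggers of a table that sits under LOCK TABLES,
// so trigger bodies never fail with "was not locked with LOCK TABLES".
void add_trigger_tables(const Table &table, std::vector<std::string> *to_open) {
    if (!table.locked_by_lock_tables) return;
    for (const Trigger &trg : table.triggers)
        for (const std::string &t : trg.used_tables)
            to_open->push_back(t);
}
```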
-
kostja@bodhi.(none) authored
between perm and temp tables. Review fixes. The original bug report complains that if we locked a temporary table with a LOCK TABLES statement, we would not leave LOCK TABLES mode when this temporary table was dropped. Additionally, the bug was escalated when it was discovered that when a temporary transactional table that was previously locked with a LOCK TABLES statement was dropped, further actions with this table, such as UNLOCK TABLES, would lead to a crash. The problem originates from incomplete support of transactional temporary tables. When we added calls to handler::store_lock()/handler::external_lock() to operations that work with such tables, we only covered the normal server code flow and did not cover LOCK TABLES mode. In LOCK TABLES mode, ::external_lock(LOCK) would sometimes be called without a matching ::external_lock(UNLOCK), e.g. when a transactional temporary table was dropped. Additionally, this table would be left in the list of LOCKed TABLES. The patch aims to address this inadequacy. Now, whenever an instance of 'handler' is destroyed, we assert that it was previously external_lock(UNLOCK)-ed. All the places that violated this assert were fixed. This patch introduces no changes in behavior -- the discrepancy in behavior will be fixed when we start calling ::store_lock()/::external_lock() for all tables, regardless of whether they are transactional or not, temporary or not.
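The destructor assertion can be pictured with a minimal standalone sketch; handler_sketch is a simplified stand-in, not the real handler class, and the integer lock constants are illustrative.

```cpp
#include <cassert>

// Simplified stand-in for the storage engine handler interface: every
// external_lock(UNLOCK) must balance a prior external_lock(LOCK), and the
// destructor asserts that balance.
class handler_sketch {
public:
    static const int LOCK = 1;
    static const int UNLOCK = 0;

    int external_lock(int lock_type) {
        locked_ = (lock_type != UNLOCK);
        return 0;
    }

    virtual ~handler_sketch() {
        // A handler must never be destroyed while it still holds a lock,
        // e.g. when a locked transactional temporary table is dropped.
        assert(!locked_);
    }

private:
    bool locked_ = false;
};
```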
-
svoj@june.mysql.com authored
into mysql.com:/home/svoj/devel/mysql/BUG29957/mysql-5.0-engines
-
svoj@mysql.com/june.mysql.com authored
INSERT/DELETE/UPDATE followed by ALTER TABLE within LOCK TABLES may cause table corruption on Windows. This happens because ALTER TABLE writes outdated shared state info into the index file. Fixed by removing an obsolete workaround. Affects MyISAM tables on Windows only.
-
- 26 Jul, 2007 2 commits
-
-
acurtis/antony@ltamd64.xiphis.org authored
into xiphis.org:/anubis/antony/work/mysql-5.0-engines.merge
-
-
- 25 Jul, 2007 6 commits
-
-
gshchepa/uchum@gleb.loc authored
into gleb.loc:/home/uchum/work/bk/5.0-opt
-
"Federated Denial of Service" Federated storage engine used to attempt to open connections within the ::create() and ::open() methods which are invoked while LOCK_open mutex is being held by mysqld. As a result, no other client sessions can open tables while Federated is attempting to open a connection. Long DNS lookup times would stall mysqld's operation and a rogue connection string which connects to a remote server which simply stalls during handshake can stall mysqld for a much longer period of time. This patch moves the opening of the connection much later, when the federated actually issues queries, by which time the LOCK_open mutex is no longer being held.
-
gshchepa/uchum@gleb.loc authored
into gleb.loc:/home/uchum/work/bk/5.0-opt
-
anozdrin/alik@ibm. authored
binary SHOW CREATE TABLE or SELECT FROM I_S. The problem is that mysqldump generates an incorrect dump for a table with a non-ASCII column name if mysqldump's character set is ASCII. The fix is to: 1. Switch character_set_client for mysqldump's connection to binary before issuing the SHOW CREATE TABLE statement, in order to avoid conversion. 2. Wrap the dumped CREATE TABLE statement in statements that switch character_set_client to UTF8 and back.
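The two steps can be sketched with plain libmysqlclient calls; the function below is illustrative, not the real mysqldump code, and the exact version-comment wrapper text is an assumption.

```cpp
#include <mysql.h>
#include <cstdio>

static void dump_create_table(MYSQL *mysql, const char *table, std::FILE *out) {
    // Step 1: switch character_set_client to binary so the non-ASCII
    // identifiers are not converted.
    mysql_query(mysql, "SET character_set_client = binary");

    char query[512];
    std::snprintf(query, sizeof(query), "SHOW CREATE TABLE `%s`", table);
    if (mysql_query(mysql, query)) return;

    MYSQL_RES *res = mysql_store_result(mysql);
    if (!res) return;
    MYSQL_ROW row = mysql_fetch_row(res);   // column 1 holds the DDL text
    if (row && row[1]) {
        // Step 2: wrap the dumped statement so a restoring client parses it
        // as UTF8 and then restores its previous character_set_client.
        std::fprintf(out,
            "/*!40101 SET @saved_cs_client = @@character_set_client */;\n"
            "/*!40101 SET character_set_client = utf8 */;\n"
            "%s;\n"
            "/*!40101 SET character_set_client = @saved_cs_client */;\n",
            row[1]);
    }
    mysql_free_result(res);
}
```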
-
anozdrin/alik@ibm. authored
This allows 5.0 to work with 5.1 databases.
-
malff/marcsql@weblab.(none) authored
into weblab.(none):/home/marcsql/TREE/mysql-5.0-29959
-
- 24 Jul, 2007 6 commits
-
-
kostja@bodhi.(none) authored
into bodhi.(none):/opt/local/work/mysql-5.0-runtime
-
antony@pcg5ppc.xiphis.org authored
--- fix compile on Windows for bug25714.c
-
antony@pcg5ppc.xiphis.org authored
-
evgen@moonbone.local authored
When the SQL_BIG_RESULT flag is specified, SELECT should store items from the select list in the filesort data and use them when sending results to the client. The get_addon_fields function is responsible for creating the necessary structures for that. But this function was allowed to do so only for SELECT and INSERT .. SELECT queries, which made SQL_BIG_RESULT useless for CREATE .. SELECT queries. Now get_addon_fields allows storing select list items in the filesort data for CREATE .. SELECT queries as well.
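The gist of the change is the broadened command check; a simplified sketch follows, where the enum and function names are illustrative, not the real get_addon_fields signature.

```cpp
// Whether filesort may pack select-list items ("addon fields") with the
// sort keys instead of re-reading rows after sorting.
enum class SqlCommand { SELECT, INSERT_SELECT, CREATE_TABLE_SELECT, OTHER };

static bool may_pack_addon_fields(SqlCommand cmd) {
    switch (cmd) {
    case SqlCommand::SELECT:
    case SqlCommand::INSERT_SELECT:
    case SqlCommand::CREATE_TABLE_SELECT:   // newly allowed by this patch
        return true;
    default:
        return false;
    }
}
```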
-
acurtis/antony@ltamd64.xiphis.org authored
into xiphis.org:/anubis/antony/work/p2-bug25714.1.merge-5.0
-
antony@pcg5ppc.xiphis.org authored
"getGeneratedKeys() does not work with FEDERATED table" mysql_insert() expected the storage engine to update the row data during the write_row() operation with the value of the new auto- increment field. The field must be updated when only one row has been inserted as mysql_insert() would ignore the thd->last_insert. This patch implements HA_STATUS_AUTO support in ha_federated::info() and ensures that ha_federated::write_row() does update the row's auto-increment value. The test case was written in C as the protocol's 'id' value is accessible through libmysqlclient and not via SQL statements. mysql-test-run.pl was extended to enable running the test binary.
-
- 23 Jul, 2007 5 commits
-
-
malff/marcsql@weblab.(none) authored
In sql/sql_yacc.yy, use the %prec construct at the end of rule join_table
-
gkodinov/kgeorge@magare.gmz authored
into magare.gmz:/home/kgeorge/mysql/autopush/B29644-5.0-opt
-
igor@olga.mysql.com authored
into olga.mysql.com:/home/igor/mysql-5.0-opt
-
gkodinov/kgeorge@magare.gmz authored
Limit the fix for bug 28591 to InnoDB only
-
igor@olga.mysql.com authored
If a primary key is defined over column c of enum type, then the EXPLAIN command for a look-up query of the form SELECT * FROM t WHERE c=0 said that the query had an impossible WHERE condition, although the query correctly returned a non-empty result set when the table indeed contained rows with the error empty string for column c. This misbehavior was due to a bug in the function Field_enum::store(longlong,bool) that erroneously returned 1 if the value to be stored was equal to 0. Note that the method Field_enum::store(const char *from, uint length, CHARSET_INFO *cs) correctly returned 0 when a value of the error empty string was stored.
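The corrected behaviour can be sketched as follows; this is a simplified stand-alone function, not the real Field_enum::store(longlong,bool). The point is that 0 (the error empty string) is a legal value to store, so the function must not report it as a failure.

```cpp
// Returns 0 on success, 1 when the value is out of range for the enum.
static int enum_store_sketch(long long value, unsigned enum_count,
                             unsigned long long *stored) {
    if (value < 0 || static_cast<unsigned long long>(value) > enum_count) {
        *stored = 0;   // out of range: fall back to the empty-string value
        return 1;      // and signal the problem to the caller
    }
    *stored = static_cast<unsigned long long>(value);
    return 0;          // value == 0 included: storing it is not an error
}
```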
-
- 22 Jul, 2007 5 commits
-
-
istruewing@chilla.local authored
into chilla.local:/home/mydev/mysql-5.0-axmrg
-
gkodinov/kgeorge@magare.gmz authored
into magare.gmz:/home/kgeorge/mysql/autopush/B28951-5.0-opt
-
holyfoot/hf@mysql.com/hfmain.(none) authored
into mysql.com:/home/hf/work/29494/my41-29494
-
holyfoot/hf@hfmain.(none) authored
into mysql.com:/home/hf/work/29494/my50-29494
-
holyfoot/hf@hfmain.(none) authored
into mysql.com:/home/hf/work/29494/my50-29494
-
- 21 Jul, 2007 2 commits
-
-
igor@olga.mysql.com authored
into olga.mysql.com:/home/igor/mysql-5.0-opt
-
gshchepa/uchum@gleb.loc authored
into gleb.loc:/home/uchum/work/bk/5.0-opt
-