  1. 08 Jul, 2007 1 commit
    • evgen@moonbone.local
      Bug#29310: An InnoDB table was updated when the data wasn't actually changed. · 42d1e3c4
      When a table is being updated it has two sets of fields: fields required for
      checks of conditions and fields to be updated. A storage engine is allowed
      not to retrieve columns marked for update. Because of this, records can't
      be compared to see whether the data has been changed or not. This makes the
      server always update records regardless of whether the data changed.

      Now, when an auto-updatable timestamp field is present and the server sees
      that a table handler isn't going to retrieve write-only fields, all such
      fields are marked to be read, forcing the handler to retrieve them.
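      A minimal sketch of the idea (hedged: the helper is hypothetical, but it
      uses the server's real bitmap API for the table read/write sets):

        /* Force the handler to also read fields that are marked only for
           write, so that the old and new row images can be compared. */
        static void mark_write_only_fields_for_read(TABLE *table)
        {
          for (Field **fld= table->field; *fld; fld++)
          {
            uint idx= (*fld)->field_index;
            if (bitmap_is_set(table->write_set, idx) &&
                !bitmap_is_set(table->read_set, idx))
              bitmap_set_bit(table->read_set, idx);
          }
        }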
  2. 28 Jun, 2007 1 commit
    • gkodinov/kgeorge@magare.gmz
      Bug #29157: UPDATE, changed rows incorrect · 71aaf52a
      Sometimes the number of really updated rows (with changed
      column values) cannot be determined at the server level
      alone (e.g. if the storage engine does not return enough
      column values to verify that). So the only dependable way
      in such cases is to let the storage engine return that
      information if possible.
      Fixed the bug at the server level by providing a way for the
      storage engine to return information about whether it
      actually updated the row or the old and the new column
      values are the same. It can do that by returning
      HA_ERR_RECORD_IS_THE_SAME in ha_update_row().
      Note that each storage engine may choose not to
      return this status code, so this behaviour remains
      storage engine specific.
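      A hedged sketch of how an engine can report the no-change case from its
      update_row() method ("ha_example" is a stand-in engine name; real engines
      compare row images in engine-specific ways, not necessarily by memcmp):

        int ha_example::update_row(const uchar *old_data, uchar *new_data)
        {
          /* If the new row image is identical to the old one, tell the
             server that nothing actually changed. */
          if (!memcmp(old_data, new_data, table->s->reclength))
            return HA_ERR_RECORD_IS_THE_SAME;
          /* ... otherwise perform the real update ... */
          return 0;
        }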
  3. 19 Jun, 2007 1 commit
  4. 18 Jun, 2007 1 commit
    • mhansson/martin@linux-st28.site
      Bug#28677: SELECT on missing column gives extra error · cc2d534a
      The method select_insert::send_error does two things: it rolls back the
      statement being executed and outputs an error message. But when a
      nonexistent column is referenced, an error message has already been issued,
      and there is no need to issue another.
      Fixed by moving all functionality other than issuing the error message into
      select_insert::abort() and calling only that function.
  5. 12 Jun, 2007 1 commit
  6. 11 Jun, 2007 2 commits
  7. 10 Jun, 2007 1 commit
    • kostja@bodhi.(none)
      Follow up after work on Bug 4968 · 6c352d16
      Coding style: classes start with a capital letter.
      Rename some classes related to parsing:
      create_field -> Create_field
      foreign_key -> Foreign_key
      key_part_spec -> Key_part_spec
  8. 06 Jun, 2007 1 commit
    • evgen@moonbone.local
      Bug#28505: mysql_affected_rows() may return wrong result if CLIENT_FOUND_ROWS flag is set. · b9090113
      When the CLIENT_FOUND_ROWS flag is set, the server should return the
      number of found rows regardless of whether they were updated or not.
      But this wasn't the case for the INSERT statement, which always returned
      the number of rows that were actually changed, thus providing wrong info
      to the user.

      Now the select_insert::send_eof method and the mysql_insert function
      send the number of touched rows if the CLIENT_FOUND_ROWS flag is set.
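      A minimal sketch of the row-count choice (hedged: the surrounding code is
      simplified, and "info" stands for the statement's COPY_INFO bookkeeping):

        /* Report all "touched" rows to clients that asked for found rows;
           otherwise report only rows whose values actually changed. */
        ha_rows count= (thd->client_capabilities & CLIENT_FOUND_ROWS) ?
                       info.touched : info.updated;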
  9. 29 May, 2007 2 commits
  10. 28 May, 2007 4 commits
    • aelkin/elkin@dsl-hkibras1-ff5dc300-70.dhcp.inet.fi
      Bug#22725 Replication outages from ER_SERVER_SHUTDOWN (1053) set in replication events · b8a5a770
      The reason for the bug was that replaying a query on the slave was not
      possible because its event was recorded with the killed error. Due to the
      specifics of handling INSERT, whose per-row while-loop cannot be
      interrupted by killing, a query on a transactional table should not have
      appeared in the binlog unless there was a call to a stored routine that got
      interrupted by the kill (and then an error must be returned out of the loop).

      The offered solution adds the following rule for binlogging of INSERT,
      accounting for the above specifics:
      for INSERT on a transactional table, if no error was set, the raised
      killed flag alone is harmless and is masked out at the time the binlog
      event is created.

      For both table types, the combination of a raised error and the KILLED
      flag indicates that there was potentially partial execution on the master
      and consistency is in question.
      In that case the code continues to binlog the event with the appropriate
      killed error.

      The fix relies on the specified behaviour of stored routines, which must
      propagate the error to the top-level query handling if the thd->killed
      flag was raised during routine execution.

      The patch adds an argument with a default killed-status-unset value to
      Query_log_event::Query_log_event.
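      A hedged sketch of the rule (the constructor call shown uses a
      hypothetical signature; only the masking logic follows the description
      above):

        /* On a transactional table, a killed flag without an error is
           harmless for binlogging, so log the event as not killed. */
        THD::killed_state killed_status=
          (transactional_table && !error) ? THD::NOT_KILLED : thd->killed;
        Query_log_event qinfo(thd, query, query_length,
                              transactional_table, FALSE, killed_status);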
    • thek@adventure.(none)
      Bug#24988 FLUSH PRIVILEGES causes brief unavailability · 5f06a456
      - A race condition caused brief unavailability when trying to access
        a table.
      - The variable 'grant_option' was removed to resolve the race condition and
        to simplify the design pattern. This flag was originally intended to
        optimize grant checks.
    • jani@a88-113-38-195.elisa-laajakaista.fi
      Changed accidentally added tabs back into spaces. · b0352197
      Fixed a bug that came in with a merge.
    • kostja@vajra.(none)
      5.1 version of a fix and test cases for bugs: · c7594877
      Bug#4968 "Stored procedure crash if cursor opened on altered table"
      Bug#6895 "Prepared Statements: ALTER TABLE DROP COLUMN does nothing"
      Bug#19182 "CREATE TABLE bar (m INT) SELECT n FROM foo; doesn't work from
      stored procedure."
      Bug#19733 "Repeated alter, or repeated create/drop, fails"
      Bug#22060 "ALTER TABLE x AUTO_INCREMENT=y in SP crashes server"
      Bug#24879 "Prepared Statements: CREATE TABLE (UTF8 KEY) produces a
      growing key length" (this bug is not fixed in 5.0)

      Re-execution of CREATE DATABASE, CREATE TABLE and ALTER TABLE
      statements in stored routines or as prepared statements caused
      incorrect results (and crashes in versions prior to 5.0.25).

      In 5.1 the problem occurred only for CREATE DATABASE, CREATE TABLE ...
      SELECT and CREATE TABLE with INDEX/DATA DIRECTORY options.

      The problem in bugs 4968, 19733, 19182 and 6895 was that the functions
      mysql_prepare_table, mysql_create_table and mysql_alter_table are not
      re-execution friendly: during their operation they modify the contents
      of LEX (members create_info, alter_info, key_list, create_list),
      thus making the LEX unusable for the next execution.
      In particular, these functions removed processed columns and keys from
      create_list, key_list and drop_list. Search the code in sql_table.cc
      for drop_it.remove() and similar patterns to find evidence.

      The fix is to supply to these functions a usable copy of each of the
      above structures at every re-execution of an SQL statement.

      To simplify memory management, LEX::key_list and LEX::create_list
      were added to LEX::alter_info, a fresh copy of which is created for
      every execution.

      The problem in the crashing bug 22060 stemmed from the fact that the
      above-mentioned functions were not only modifying the HA_CREATE_INFO
      structure in LEX, but also were changing it to point to areas in volatile
      memory of the execution memory root.

      The patch solves this problem by creating and using an on-stack
      copy of HA_CREATE_INFO in mysql_execute_command.

      Additionally, this patch splits the part of mysql_alter_table
      that analyzes and rewrites information from the parser into
      a separate function - mysql_prepare_alter_table, in analogy with
      mysql_prepare_table, which is renamed to mysql_prepare_create_table.
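      A minimal sketch of the re-execution-safe pattern described above
      (hedged: simplified, and the exact constructor signatures may differ):

        case SQLCOM_CREATE_TABLE:
        {
          /* Work on copies so that LEX stays intact for re-execution. */
          HA_CREATE_INFO create_info(lex->create_info);
          Alter_info alter_info(lex->alter_info, thd->mem_root);
          /* ... pass the copies down to mysql_create_table() instead of
             the LEX originals ... */
        }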
  11. 22 May, 2007 1 commit
    • malff/marcsql@weblab.(none)
      Bug#21554 (sp_cache.cc: violates C++ aliasing rules) · 782096db
      The problem reported is a compile bug,
      reported by the development GCC team with GCC 4.2.

      The original issue can no longer be reproduced in MySQL 5.1,
      since the configure script no longer defines HAVE_ATOMIC_ADD,
      which caused the Linux atomic functions to be used (and caused a problem
      with an invalid cast).

      This patch implements some code cleanup for 5.1 only, which was identified
      during the investigation of this issue.

      With this patch, statistics maintained in THD::status_var are by definition
      owned by the running thread, and do not need to be protected against race
      conditions. These statistics are maintained by the status_var_* helpers,
      which do not require any lock.
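      A tiny illustration of the lock-free pattern (hedged: the counter name
      is only an example):

        /* Each THD owns its own status_var, so a plain increment is safe;
           no mutex is taken. */
        thd->status_var.opened_tables++;   /* via a status_var_* helper */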
  12. 16 May, 2007 3 commits
    • kostja@vajra.(none)
      Fix a failing assert. · bb2a43dd
    • kostja@vajra.(none)
      Correct a merge error. · 752972b5
    • kostja@vajra.(none)
      A fix and a test case for · 747842e1
      Bug#21483 "Server abort or deadlock on INSERT DELAYED with another
      implicit insert"
      Also fixes and adds test cases for bugs:
      20497 "Trigger with INSERT DELAYED causes Error 1165"
      21714 "Wrong NEW.value and server abort on INSERT DELAYED to a
      table with a trigger".
      Post-review fixes.

      Problem:
      In MySQL, INSERT DELAYED is a way to pipe all inserts into a
      given table through a dedicated thread. This is necessary for
      simplistic storage engines like MyISAM, which do not have internal
      concurrency control or threading and thus cannot
      achieve efficient INSERT throughput without support from the SQL layer.
      DELAYED INSERT works as follows:
      For every distinct table which can accept DELAYED inserts and has
      pending data to insert, a dedicated thread is created to write data
      to disk. All user connection threads that attempt to
      delayed-insert into this table interact with the dedicated thread in
      producer/consumer fashion: all records to be inserted are pushed
      into a queue of the dedicated thread, which fetches the records and
      writes them.
      In this design, client connection threads never open or lock
      the delayed insert table.
      This functionality was introduced in version 3.23 and does not take
      into account the existence of triggers, views, or pre-locking.
      E.g. if INSERT DELAYED is called from a stored function which,
      in turn, is called from another stored function that uses the delayed
      table, a deadlock can occur, because delayed locking bypasses
      pre-locking. Besides:
       * the delayed thread works directly with the subject table through
         the storage engine API and does not invoke triggers
       * even if it were patched to invoke triggers, if those triggers,
         in turn, used other tables, the delayed thread would
         have to open and lock the involved tables (use pre-locking).
       * even if it were patched to use pre-locking, without deadlock
         detection the delayed thread could easily lock out user
         connection threads when the same table is used both
         in a trigger and on the right side of the insert query:
         the delayed thread would not release locks until all inserts
         are complete, and a user connection cannot complete its inserts
         without having locks on the tables used on the right side of the
         query.

      Solution:

      These considerations suggest two general alternatives for the
      future of INSERT DELAYED:
       * it is considered a full-fledged alternative to normal INSERT
       * it is regarded as an optimisation that is only relevant
         for simplistic engines.
      Since we missed our chance to provide complete support for new
      features when 5.0 was in development, the first alternative is
      currently infeasible.
      However, even the second alternative, which is to detect
      new features and convert a DELAYED insert into a normal insert,
      is not easy to implement.
      The catch-22 is that we don't know if the subject table has triggers
      or is a view before we open it, and we only open it in the
      delayed thread. We don't know if the query involves pre-locking
      until we have opened all tables, and we always first create
      the delayed thread, and only then open the remaining tables.
      This patch detects the problematic scenarios and converts
      DELAYED INSERT to a normal INSERT using the following approach
      (sketched in pseudocode after this list):
       * if the statement is executed under pre-locking (e.g. from
         within a stored function or trigger) or the right
         side may require pre-locking, we detect the situation
         before creating a delayed insert thread and convert the statement
         to a conventional INSERT.
       * if the subject table is a view or has triggers, we shut down
         the delayed thread and convert the statement to a conventional
         INSERT.
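      A hedged pseudocode sketch of the conversion decision (all helper names
      here are hypothetical; the real checks live in the INSERT code path):

        /* Before starting a delayed-insert thread: */
        if (statement_uses_prelocking(thd) || rhs_may_require_prelocking(lex))
          convert_delayed_to_normal_insert(lex);
        /* After the delayed thread has opened the table: */
        else if (table_is_view(table) || table->triggers)
        {
          shutdown_delayed_thread(table);
          convert_delayed_to_normal_insert(lex);
        }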
  13. 14 May, 2007 2 commits
    • mats@romeo.kindahl.net
      WL#3339 (Issue warnings when statement-based replication may fail): · 6a7925a2
      Replacing binlog_row_based_if_mixed with the variable binlog_stmt_flags
      holding several flags, and adding member functions to manipulate the
      flags.

      Added code to generate a warning when an attempt is made to log an unsafe
      statement to the binary log. The warning is both pushed to SHOW WARNINGS
      and written to the error log. To prevent flooding
      the error log, the warning is only written to the error log once per
      open session.
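      A minimal sketch of the flag-set pattern (hedged: names are modeled on
      the description above, not taken from the actual patch):

        enum enum_binlog_stmt_flag {
          BINLOG_STMT_FLAG_ROW_BASED_IF_MIXED= 0,
          BINLOG_STMT_FLAG_UNSAFE
        };
        uint32 binlog_stmt_flags;

        void set_stmt_unsafe()
        { binlog_stmt_flags|= (1U << BINLOG_STMT_FLAG_UNSAFE); }
        bool is_stmt_unsafe() const
        { return binlog_stmt_flags & (1U << BINLOG_STMT_FLAG_UNSAFE); }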
    • kostja@vajra.(none)
      Fix a warning. · 162e9b98
  14. 11 May, 2007 2 commits
    • dlenev@mockturtle.local
      Fix for: · 4cafc8ee
        Bug #20662 "Infinite loop in CREATE TABLE IF NOT EXISTS ... SELECT
                    with locked tables"
        Bug #20903 "Crash when using CREATE TABLE .. SELECT and triggers"
        Bug #24738 "CREATE TABLE ... SELECT is not isolated properly"
        Bug #24508 "Inconsistent results of CREATE TABLE ... SELECT when
                    temporary table exists"

      A deadlock occurred when one tried to execute a CREATE TABLE IF NOT
      EXISTS ... SELECT statement under LOCK TABLES which held
      a read lock on the target table.
      An attempt to execute the same statement for an already existing
      target table with triggers caused server crashes.
      Also, concurrent execution of CREATE TABLE ... SELECT statements
      and other statements involving the target table suffered from
      various races (some of which might've led to deadlocks).
      Finally, an attempt to execute CREATE TABLE ... SELECT when
      a temporary table with the same name was already present
      led to the insertion of data into this temporary table and
      creation of an empty non-temporary table.

      All the above problems stemmed from the old implementation of CREATE
      TABLE ... SELECT, in which we created, opened and locked the target
      table without any special protection, in a separate step and not
      with the rest of the tables used by this statement.
      This undermined the deadlock-avoidance approach used in the server
      and created a window for races. It also excluded the target table
      from pre-locking, causing problems with trigger execution.

      The patch solves these problems by implementing a new approach to
      the handling of CREATE TABLE ... SELECT for base tables.
      We try to open and lock the table to be created at the same time as
      the rest of the tables used by this statement. If such a table does
      not exist at this moment, we create and place in the table cache a
      special placeholder for it which prevents its creation or any other
      usage by other threads.
      We still use the old approach for the creation of temporary tables.

      Note that we have a separate fix for 5.0, since there we use a
      slightly different, less intrusive approach.
    • dlenev@mockturtle.local
      Fix for: · 8b93e52e
        Bug #20662 "Infinite loop in CREATE TABLE IF NOT EXISTS ... SELECT
                    with locked tables"
        Bug #20903 "Crash when using CREATE TABLE .. SELECT and triggers"
        Bug #24738 "CREATE TABLE ... SELECT is not isolated properly"
        Bug #24508 "Inconsistent results of CREATE TABLE ... SELECT when
                    temporary table exists"

      A deadlock occurred when one tried to execute a CREATE TABLE IF NOT
      EXISTS ... SELECT statement under LOCK TABLES which held
      a read lock on the target table.
      An attempt to execute the same statement for an already existing
      target table with triggers caused server crashes.
      Also, concurrent execution of CREATE TABLE ... SELECT statements
      and other statements involving the target table suffered from
      various races (some of which might've led to deadlocks).
      Finally, an attempt to execute CREATE TABLE ... SELECT when
      a temporary table with the same name was already present
      led to the insertion of data into this temporary table and
      creation of an empty non-temporary table.

      All the above problems stemmed from the old implementation of CREATE
      TABLE ... SELECT, in which we created, opened and locked the target
      table without any special protection, in a separate step and not
      with the rest of the tables used by this statement.
      This undermined the deadlock-avoidance approach used in the server
      and created a window for races. It also excluded the target table
      from pre-locking, causing problems with trigger execution.

      The patch solves these problems by implementing a new approach to
      the handling of CREATE TABLE ... SELECT for base tables.
      We try to open and lock the table to be created at the same time as
      the rest of the tables used by this statement. If such a table does
      not exist at this moment, we create and place in the table cache a
      special placeholder for it which prevents its creation or any other
      usage by other threads.

      We still use the old approach for the creation of temporary tables.

      Also note that we decided to postpone the introduction of some tests
      for the concurrent behaviour of CREATE TABLE ... SELECT until 5.1.
      The main reason for this is the absence in 5.0 of the ability to set
      the @@debug variable at runtime, which can be circumvented only by
      using several test files with individual .opt files. Since the latter
      is likely to slow down the test suite unnecessarily, we chose not to
      push these tests into 5.0, but to run them manually for this version
      and later push an optimized version of them into 5.1.
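      A hedged sketch of the placeholder idea from these two commits (all
      helper names here are hypothetical):

        /* While opening the tables of CREATE TABLE ... SELECT: */
        if (!table_exists(thd, db, table_name))
        {
          /* Reserve the name in the table cache so no other thread can
             create or open a table with this name until we are done. */
          TABLE *placeholder=
            allocate_name_lock_placeholder(thd, db, table_name);
          insert_placeholder_into_table_cache(thd, placeholder);
        }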
  15. 10 May, 2007 4 commits
    • gshchepa/uchum@gleb.loc
      Fixed bug #28000. · 848f56b0
      The bug occurs in INSERT IGNORE ... SELECT ... ON DUPLICATE KEY UPDATE
      statements when SELECT returns duplicated values and the UPDATE clause
      tries to assign NULL values to NOT NULL fields.
      NOTE: By current design the MySQL server treats INSERT IGNORE ... ON
      DUPLICATE statements as INSERT ... ON DUPLICATE with update of
      duplicated records, but the MySQL manual lacks this information.
      After this fix such behaviour becomes legalized.

      The write_record() function was returning error values even within
      INSERT IGNORE, because the ignore_errors parameter of
      the fill_record_n_invoke_before_triggers() function call was
      always set to FALSE. FALSE is replaced by info->ignore.
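      A sketch of the one-line fix described above (hedged: the argument list
      is abbreviated):

        /* Pass the statement's IGNORE flag through instead of FALSE, so
           errors raised while filling the record are ignored as requested. */
        fill_record_n_invoke_before_triggers(thd, *update_fields,
                                             *update_values,
                                             info->ignore /* was FALSE */,
                                             table->triggers,
                                             TRG_EVENT_UPDATE);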
    • kostja@vajra.(none) · 0d6e93e0
    • kostja@vajra.(none)
      No semantic change. Move checks of compatibility · 41239f1f
      of the requested lock type and requested table operation from
      mysql_insert into a separate function.
    • monty@mysql.com/narttu.mysql.fi
      WL#3817: Simplify string / memory area types and make things more consistent (first part) · 088e2395
      The following type conversions were done:

      - Changed byte to uchar
      - Changed gptr to uchar*
      - Changed my_string to char *
      - Changed my_size_t to size_t
      - Changed size_s to size_t

      Removed declarations of byte, gptr, my_string, my_size_t and size_s.

      The following function parameter changes were made:
      - All string functions in mysys/strings were changed to use size_t
        instead of uint for string lengths.
      - All read()/write() functions changed to use size_t (including vio).
      - All protocol functions changed to use size_t instead of uint.
      - Functions that used a pointer to a string length were changed to use
        size_t*.
      - Changed malloc(), free() and related functions from using gptr to use
        void *, as this requires fewer casts in the code and is more in line
        with how the standard functions work.
      - Added an extra length argument to dirname_part() to return the length
        of the created string.
      - Changed (at least) the following functions to take uchar* as argument:
        - db_dump()
        - my_net_write()
        - net_write_command()
        - net_store_data()
        - DBUG_DUMP()
        - decimal2bin() & bin2decimal()
      - Changed my_compress() and my_uncompress() to use size_t. Changed one
        argument of my_uncompress() from a pointer to a value, as we only
        return one value (makes the function easier to use).
      - Changed the type of the 'pack_data' argument to packfrm() to avoid casts.
      - Changed in readfrm() and writefrm(), ha_discover and handler::discover()
        the type of the 'frmdata' argument to uchar** to avoid casts.
      - Changed most Field functions to use uchar* instead of char* (removed a
        lot of casts).
      - Changed field->val_xxx(xxx, new_ptr) to take const pointers.

      Other changes:
      - Removed a lot of unneeded casts.
      - Added a few new casts required by other changes.
      - Added some casts to my_multi_malloc() arguments for safety (as string
        lengths need to be uint, not size_t).
      - Fixed all calls to hash-get-key functions to use size_t*. (This needed
        to be done explicitly, as the conflict was often hidden by casting the
        function to hash_get_key.)
      - Changed some buffers holding memory regions to uchar* to avoid casts.
      - Changed some string lengths from uint to size_t.
      - Changed field->ptr to be uchar* instead of char*. This allowed us to
        get rid of a lot of casts.
      - Some changes from true -> TRUE, false -> FALSE, unsigned char -> uchar.
      - Included zlib.h in some files, as we needed the declaration of crc32().
      - Changed MY_FILE_ERROR to be (size_t) -1.
      - Changed many variables that hold the result of my_read() / my_write()
        to be size_t. This was needed to properly detect errors (which are
        returned as (size_t) -1); see the snippet after this list.
      - Removed some very old VMS code.
      - Changed packfrm()/unpackfrm() to not depend on uint size
        (portability fix).
      - Removed Windows-specific code to restore the cursor position, as this
        causes a slowdown on Windows, and we should not mix read() and pread()
        calls anyway, as this is not thread safe. Updated the function comment
        to reflect this. Changed the function that depended on the original
        behaviour of my_pwrite() to itself restore the cursor position (one
        such case).
      - Added some missing checks of the return value of malloc().
      - Changed the definition of MOD_PAD_CHAR_TO_FULL_LENGTH to avoid 'long'
        overflow.
      - Changed the type of table_def::m_size from my_size_t to ulong to
        reflect that m_size is the number of elements in the array, not a
        string/memory length.
      - Moved THD::max_row_length() to table.cc (as it does not depend on THD).
        Inlined max_row_length_blob() into this function.
      - More function comments.
      - Fixed some compiler warnings when compiled without partitions.
      - Removed setting of LEX_STRING() arguments in declarations (portability
        fix).
      - Some trivial indentation/variable name changes.
      - Some trivial code simplifications:
        - Replaced some calls to alloc_root + memcpy with
          strmake_root()/strdup_root().
        - Changed some calls from memdup() to strmake() (safety fix).
        - Simpler loops in client-simple.c
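      The size_t error-checking convention mentioned above, in a hedged sketch
      (variable names are illustrative):

        size_t bytes_read= my_read(fd, buffer, to_read, MYF(0));
        if (bytes_read == (size_t) -1)
        {
          /* my_read() signals failure as (size_t) -1, which an unsigned
             size_t comparison detects reliably. */
        }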
  16. 12 Apr, 2007 1 commit
    • mats@romeo.(none)
      BUG#25688 (RBR: circular replication may cause STMT_END_F flags to be skipped): · 11fc24ef
      By moving statement end actions from Rows_log_event::do_apply_event() to
      Rows_log_event::do_update_pos() they will always be executed, even if
      Rows_log_event::do_apply_event() is skipped because the event originated
      at the same server. This is because Rows_log_event::do_update_pos() is
      always executed (unless Rows_log_event::do_apply_event() failed with an
      error, in which case the slave stops with an error anyway).

      Adding a test case.

      Fixed the logic to detect whether the thread is inside a group. If a
      rotate event occurred when only an initial prefix of the events for a
      statement had been processed (for a table that did contain a key),
      last_event_start_time was set to zero, causing the rotate to end the
      group but without unlocking any tables. This left a lock hanging around,
      which subsequently triggered an assertion when a second attempt was made
      to lock the same sequence of tables.

      In order to solve the above problem, a new flag was added to the relay
      log info structure that is used to indicate that the replication thread
      is currently executing a statement. Using this flag, the replication
      thread is in a group if it is either in a statement or inside a
      transaction (see the sketch below).

      The patch also eliminates some gratuitous header file inclusions that
      were not needed (and caused compile errors) and replaces them with
      forward declarations.
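      A hedged sketch of the in-group test (member names approximate the
      description above and may differ from the patch):

        /* The SQL thread is inside a group if it is inside a transaction
           or currently executing a statement. */
        bool Relay_log_info::is_in_group() const
        {
          return (sql_thd->options & OPTION_BEGIN) ||
                 (m_flags & (1UL << IN_STMT));
        }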
  17. 04 Apr, 2007 2 commits
    • mskold/marty@mysql.com/linux.site
      Merge from 5.0 · 6c8f5c58
    • mskold/marty@mysql.com/linux.site
      Bug #26242 UPDATE with subquery and triggers failing with cluster tables · 625a2629
      In certain cases AFTER UPDATE/DELETE triggers on NDB tables that
      referenced the subject table didn't see the results of the operation
      which caused the invocation of those triggers. In other words, an AFTER
      trigger invoked as a result of an update (or deletion) of a particular
      row saw the version of this row before the update (or deletion).

      The problem occurred because the NDB handler in those cases postponed the
      actual update/delete operations to be able to perform them later as one
      batch.

      This fix solves the problem by disabling this optimization for a
      particular operation if the subject table has an AFTER trigger defined
      for that operation.
      To achieve this we introduce two new flags for the handler::extra()
      method: HA_EXTRA_DELETE_CANNOT_BATCH and HA_EXTRA_UPDATE_CANNOT_BATCH.
      These are called if there exist AFTER DELETE/UPDATE triggers during a
      statement that can potentially generate calls to delete_row()/update_row().
      This includes multi_delete/multi_update statements as well as insert
      statements that do a delete/update as part of an ON DUPLICATE KEY UPDATE
      clause.
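      A minimal sketch of how the new flags might be issued (hedged: the
      trigger-presence check is written with a hypothetical helper):

        /* Disable batching so AFTER triggers observe each row change
           as soon as it happens. */
        if (has_after_trigger(table, TRG_EVENT_UPDATE))
          table->file->extra(HA_EXTRA_UPDATE_CANNOT_BATCH);
        if (has_after_trigger(table, TRG_EVENT_DELETE))
          table->file->extra(HA_EXTRA_DELETE_CANNOT_BATCH);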
  18. 30 Mar, 2007 1 commit
    • evgen@sunlight.local
      Bug#23233: 0 as LAST_INSERT_ID() after INSERT .. ON DUPLICATE in the NO_AUTO_VALUE_ON_ZERO mode. · 7c42232d
      In the NO_AUTO_VALUE_ON_ZERO mode the table->auto_increment_field_not_null
      variable is used to indicate that a non-NULL value was specified by the
      user for an auto_increment column. When an INSERT .. ON DUPLICATE updates
      the auto_increment field, this variable is set to true and stays unchanged
      for the next insert operation. This sometimes makes the next inserted row
      wrongly have 0 as the value of the auto_increment field.

      Now the fill_record() function resets the table->auto_increment_field_not_null
      variable before filling the record.
      The table->auto_increment_field_not_null variable is also reset by the
      open_table() function, in case we missed some auto_increment_field_not_null
      handling bug.
      The table->auto_increment_field_not_null variable is now also reset at the
      end of the mysql_load() function, and after each write_row() call in the
      copy_data_between_tables() function.
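      The core of the fix in one hedged line (the surrounding fill_record()
      context is omitted):

        /* Forget any leftover "user supplied a value" state from the
           previous row before filling this one. */
        table->auto_increment_field_not_null= FALSE;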
  19. 29 Mar, 2007 1 commit
  20. 23 Mar, 2007 1 commit
    • aelkin/elkin@andrepl.(none)
      Bug #27395 OPTION_STATUS_NO_TRANS_UPDATE is not preserved at the end of SF() · 2afa90b5
      The OPTION_STATUS_NO_TRANS_UPDATE bit of thd->options was not restored at
      the end of an SF() invocation when the SF() modified a non-transactional
      table.
      As a result of this artifact, it was not possible to detect whether there
      had been any side effects when the top-level query ended.
      If the top-level query table was not modified and the bit was lost, there
      would be no binlogging.

      Fixed by preserving the bit inside the thd->no_trans_update struct. The
      struct aggregates two bool flags telling whether the current query and
      the current transaction modified any non-transactional table.
      The flags (stmt, all) are reset at the end of the query and of the
      transaction, respectively.
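      A hedged sketch of the aggregate (field names follow the description
      above):

        struct {
          bool stmt;  /* current statement modified a non-transactional table */
          bool all;   /* current transaction modified a non-transactional table */
        } no_trans_update;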
  21. 22 Mar, 2007 1 commit
    • igor@olga.mysql.com
      Fixed bug #27229: crash when a set function aggregated in outer context was used as an argument of GROUP_CONCAT. · 8f9178e8
      Ensured correct setting of the depended_from field in references
      generated for set functions aggregated in outer selects.
      A wrong value of this field resulted in wrong maps returned by
      used_tables() for these references.
      Made sure that a temporary table field is added for any set function
      aggregated in outer context when creation of a temporary table is
      needed to execute the inner subquery.
  22. 20 Mar, 2007 1 commit
    • gkodinov/kgeorge@macbook.local
      Bug #24484: · 28962a76
      To correctly decide which predicates can be evaluated with a given table,
      the optimizer must know the exact set of tables that a predicate depends
      on. If that mask is too wide (refers to non-existing tables) the optimizer
      can erroneously skip a predicate.
      One such case of a wrong table usage mask was the aggregate functions:
      they had an all-1 mask (meaning they depend on all tables, including
      non-existent ones).
      Fixed by building a real used_tables mask for the aggregates. The mask is
      constructed in the following way (see the sketch after this list):
      1. OR the table dependency masks of all the arguments of the aggregate.
      2. If all the arguments of the function are from the local name resolution
        context and it is evaluated in the same name resolution
        context where it is referenced, all the tables from that name resolution
        context are OR-ed into the dependency mask. This is to denote that an
        aggregate function depends on the number of rows it processes.
      3. Handle correctly the case of an aggregate function optimization (such
        that the aggregate function can be pre-calculated and made a constant).

      Made sure that an aggregate function is never a constant (unless it is
      the subject of a specific optimization and pre-calculation).

      One other flaw was revealed and fixed in the process: references were
      not calling the recalculation method for used_tables of their targets.
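      A hedged pseudocode sketch of steps 1 and 2 (member names are
      hypothetical):

        table_map aggregate_used_tables()
        {
          table_map map= 0;
          for (uint i= 0; i < arg_count; i++)
            map|= args[i]->used_tables();            /* step 1 */
          if (all_args_local && evaluated_in_local_context)
            map|= tables_in_name_resolution_context; /* step 2 */
          return map;
        }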
  23. 19 Mar, 2007 2 commits
    • evgen@moonbone.local
      sql_insert.cc: · 41c73249
        After merge fix.
    • evgen@moonbone.local
      sql_insert.cc: · 31b9145a
        Removed the wrong fix for bug#27006.
        The bug was introduced by the fix for bug#19978 and was fixed by Monty
        on 2007/02/21.
      trigger.test, trigger.result:
        Corrected the test case for bug#27006.
  24. 16 Mar, 2007 3 commits