  1. 07 Nov, 2006 1 commit
    • Bug#23451 GROUP_CONCAT truncates a multibyte utf8 character · 599b7316
      bar@mysql.com/bar.intranet.mysql.r18.ru authored
        
        Problem: GROUP_CONCAT on a multi-byte column can truncate
        in the middle of a multibyte character when applying
        group_concat_max_len limit. It produces an invalid
        multi-byte character in the result string.
        
        This is the second, easier version: it reuses the old
        "warning_for_row" flag instead of the "result_is_full" flag
        that was introduced in the previous commit.
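        A minimal standalone sketch of the boundary-safe truncation idea
        (an illustration only; it assumes plain UTF-8 bytes rather than the
        server's CHARSET_INFO machinery, and utf8_safe_truncate is a
        hypothetical name):

        #include <string>

        // Cut a UTF-8 string to at most max_bytes without splitting a
        // multi-byte character: back up while the first dropped byte
        // would be a continuation byte (0b10xxxxxx).
        static std::string utf8_safe_truncate(const std::string &s,
                                              std::string::size_type max_bytes)
        {
          if (s.size() <= max_bytes)
            return s;
          std::string::size_type cut= max_bytes;
          while (cut > 0 && (static_cast<unsigned char>(s[cut]) & 0xC0) == 0x80)
            cut--;                 // still inside a character, keep backing up
          return s.substr(0, cut);
        }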
  2. 16 Oct, 2006 1 commit
    • BUG#14019 : group by converts literal string to column name · 11561638
      gkodinov/kgeorge@macbook.gmz authored
         When resolving unqualified name references MySQL was not
         checking the item type of the reference. Thus
         e.g. a string literal item, which by convention has a name
         equal to its string value, would also work as a reference to
         a SELECT list item or a table field.
         Fixed by allowing only Item_ref or Item_field to be referenced
         by (unqualified) name.
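         A hedged sketch of the shape of such a check inside the resolver
         loop (illustrative only; the real code differs):

           // Only genuine column references may match an unqualified name;
           // a string literal whose item name equals its value must not.
           Item::Type type= item->type();
           if (type != Item::FIELD_ITEM && type != Item::REF_ITEM)
             continue;             // not a name reference, keep searching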
  3. 20 Sep, 2006 1 commit
    • Fixed bug #22015: crash with GROUP_CONCAT over a derived table · f2225cab
      igor@rurik.mysql.com authored
      The crash occurred with GROUP_CONCAT over a derived table that
      returns the results of aggregation by GROUP_CONCAT.
      The crash was due to an overflow in the field sortorder->length.
      The fix prevents this overflow by exploiting the fact that the
      value of sortorder->length cannot be greater than the value of
      thd->variables.max_sort_length.
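      The shape of such a fix is a simple clamp (a sketch, not the actual
      patch):

        // sortorder->length can never usefully exceed max_sort_length,
        // so capping it here prevents the overflow.
        if (sortorder->length > thd->variables.max_sort_length)
          sortorder->length= thd->variables.max_sort_length;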
  4. 10 Aug, 2006 1 commit
  5. 28 Jul, 2006 1 commit
    • BUG#14940 "MySQL choose wrong index", v.2 · 699291a8
      sergefp@mysql.com authored
      - Make the range optimizer (et al.) produce E(#table records after
        the table condition is applied),
      - Make the join optimizer use this value,
      - Add a "filtered" column to EXPLAIN EXTENDED to show the fraction
        of records left after the table condition is applied (illustrated
        below),
      - Adjust test results, add comments.
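      As an illustration of how the two EXPLAIN EXTENDED columns combine
      (the numbers are made up):

        double rows= 1000.0;    // optimizer's E(#rows read from the table)
        double filtered= 25.0;  // "filtered" column, a percentage
        // E(#records after the table condition) = rows * filtered / 100
        double expected= rows * filtered / 100.0;   // 250 rows remain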
  6. 25 Jul, 2006 1 commit
  7. 04 Jun, 2006 1 commit
    • This changeset is largely a handler cleanup changeset (WL#3281), but includes... · 74cc73d4
      monty@mysql.com authored
      This changeset is largely a handler cleanup changeset (WL#3281), but includes fixes and cleanups that were found necessary while testing the handler changes
      
      Changes that require code changes in other storage engines.
      (Note that all changes are very straightforward and one should find
      all issues by compiling a --debug build and fixing all compiler errors
      and all asserts in field.cc while running the test suite.)
      
      - New optional handler function introduced: reset()
        This is called after every DML statement to make it easy for a
        handler to do statement-specific cleanups.
        (The only case it's not called is if we force the file to be closed.)
      
      - handler::extra(HA_EXTRA_RESET) is removed. Code that was there before
        should be moved to handler::reset() (see the sketch below).
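        A hedged sketch of what moving that code looks like (ha_example and
        free_statement_buffers are stand-in names, not a real engine):

        int ha_example::reset(void)
        {
          /* cleanups formerly done under extra(HA_EXTRA_RESET) go here */
          free_statement_buffers();    // hypothetical per-statement cleanup
          return 0;
        }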
      
      - table->read_set contains a bitmap over all columns that are needed
        in the query.  read_row() and similar functions only need to read
        these columns (see the sketch below).
      - table->write_set contains a bitmap over all columns that will be
        updated in the query. write_row() and update_row() only need to
        update these columns.
        The above bitmaps should now be up to date in all contexts
        (including ALTER TABLE, filesort()).
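        Sketch of an engine honoring read_set during a row read
        (illustrative only; real engines work on their own row formats,
        and unpack_column is a hypothetical helper):

        for (Field **fp= table->field; *fp; fp++)
        {
          Field *field= *fp;
          if (!bitmap_is_set(table->read_set, field->field_index))
            continue;              // column not needed by this query
          unpack_column(field);    // hypothetical engine-specific step
        }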
      
        The handler is informed of any changes to the bitmaps after
        fix_fields() by a call to the virtual function
        handler::column_bitmaps_signal(). If the handler caches
        these bitmaps (instead of using table->read_set and table->write_set
        directly), it should redo the caching in this code. As the signal
        may be sent several times, it's probably best to set a variable in
        the signal and redo the caching on read_row() / write_row() if the
        variable was set (as sketched below).
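        A sketch of that recommended pattern (ha_example, bitmaps_dirty,
        rebuild_cached_column_masks and fetch_next_row are hypothetical
        names):

        void ha_example::column_bitmaps_signal()
        {
          bitmaps_dirty= true;     // just note that the maps changed
        }

        int ha_example::rnd_next(uchar *buf)
        {
          if (bitmaps_dirty)
          {
            rebuild_cached_column_masks();   // redo the caching lazily
            bitmaps_dirty= false;
          }
          return fetch_next_row(buf);        // normal row fetch
        }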
      
      - Removed the read_set and write_set bitmap objects from the handler class
      
      - Removed all column bit handling functions from the handler class.
        (One now uses the normal bitmap functions in my_bitmap.c instead
        of handler-dedicated bitmap functions.)
      
      - field->query_id is removed. One should instead check
        table->read_set and table->write_set to see if a field is used in
        the query.
      
      - handler::extra(HA_EXTRA_RETRIEVE_ALL_COLS) and
        handler::extra(HA_EXTRA_RETRIEVE_PRIMARY_KEY) are removed. One should
        now instead use table->read_set to check which columns to retrieve.
      
      - If a handler needs to call Field->val() or Field->store() on columns
        that are not used in the query, one should install a temporary
        all-columns-used map while doing so. For this, we provide the following
        functions:
      
        my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->read_set);
        field->val();
        dbug_tmp_restore_column_map(table->read_set, old_map);
      
        and similar for the write map:
      
        my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->write_set);
        field->store(...);
        dbug_tmp_restore_column_map(table->write_set, old_map);
      
        If this is not done, you will sooner or later hit a DBUG_ASSERT
        in the field store() / val() functions.
        (For non-DBUG binaries, dbug_tmp_use_all_columns() and
        dbug_tmp_restore_column_map() are inline dummy functions and should
        be optimized away by the compiler.)
      
      - If one needs to temporarily set the column map for all binaries (and
        not just to avoid the DBUG_ASSERT() in the Field::store() /
        Field::val() methods) one should use the functions
        tmp_use_all_columns() and tmp_restore_column_map() instead of the
        above dbug_ variants.
      
      - All 'status' fields in the handler base class (like records,
        data_file_length etc) are now stored in a 'stats' struct. This makes
        it easier to know what status variables are provided by the base
        handler.  This requires some trivial variable name changes in the
        extra() function.
      
      - New virtual function handler::records().  This is called to optimize
        COUNT(*) if (handler::table_flags() & HA_HAS_RECORDS) is true
        (sketched below).
        (stats.records is not supposed to be an exact value. It only has to
        be 'reasonable enough' for the optimizer to be able to choose a good
        optimization path.)
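        A sketch of an engine advertising and implementing the new hook
        (ha_example and exact_row_count are stand-in names):

        ulonglong ha_example::table_flags() const
        {
          return HA_HAS_RECORDS;   // tells MySQL that ::records() works
        }

        ha_rows ha_example::records()
        {
          return exact_row_count();    // hypothetical engine-side counter
        }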
      
      - Non-virtual handler::init() function added for caching of virtual
        constants from the engine.
      
      - Removed has_transactions() virtual method. Now one should instead return
        HA_NO_TRANSACTIONS in table_flags() if the table handler DOES NOT support
        transactions.
      
      - The 'xxxx_create_handler()' function now has a MEM_ROOT argument
        that is to be used with 'new handler_name()' to allocate the handler
        in the right area.  The xxxx_create_handler() function is also
        responsible for any initialization of the object before returning.
      
        For example, one should change:
      
        static handler *myisam_create_handler(TABLE_SHARE *table)
        {
          return new ha_myisam(table);
        }
      
        ->
      
        static handler *myisam_create_handler(TABLE_SHARE *table, MEM_ROOT *mem_root)
        {
          return new (mem_root) ha_myisam(table);
        }
      
      - New optional virtual function: use_hidden_primary_key().
        This is called in case of an update/delete when
        (table_flags() & HA_PRIMARY_KEY_REQUIRED_FOR_DELETE) is defined
        but we don't have a primary key. This allows the handler to take
        precautions so it can remember any hidden primary key and thus
        update/delete any found row. The default implementation marks all
        columns to be read.
      
      - handler::table_flags() now returns a ulonglong (to allow for more flags).
      
      - New/changed table_flags()
        - HA_HAS_RECORDS          Set if ::records() is supported.
        - HA_NO_TRANSACTIONS      Set if the engine doesn't support
                                  transactions.
        - HA_PRIMARY_KEY_REQUIRED_FOR_DELETE
                                  Set if we should mark all primary key
                                  columns for read when reading rows as part
                                  of a DELETE statement. If there is no
                                  primary key, all columns are marked for
                                  read.
        - HA_PARTIAL_COLUMN_READ  Set if the engine will not read all columns
                                  in some cases (based on table->read_set).
        - HA_PRIMARY_KEY_ALLOW_RANDOM_ACCESS
                                  Renamed to
                                  HA_PRIMARY_KEY_REQUIRED_FOR_POSITION.
        - HA_DUPP_POS             Renamed to HA_DUPLICATE_POS.
        - HA_REQUIRES_KEY_COLUMNS_FOR_DELETE
                                  Set this if we should mark ALL key columns
                                  for read when reading rows as part of a
                                  DELETE statement. In case of an update we
                                  will mark all keys for read for which any
                                  key part changed value.
        - HA_STATS_RECORDS_IS_EXACT
                                  Set this if stats.records is exact.
                                  (This saves us some extra records() calls
                                  when optimizing COUNT(*).)
      
      - Removed table_flags()
        - HA_NOT_EXACT_COUNT      One should now instead use HA_HAS_RECORDS
                                  if handler::records() gives an exact count
                                  and HA_STATS_RECORDS_IS_EXACT if
                                  stats.records is exact.
        - HA_READ_RND_SAME        Removed (no one supported this one).
      
      - Removed the no longer needed functions ha_retrieve_all_cols() and
        ha_retrieve_all_pk()
      
      - Renamed handler::dupp_pos to handler::dup_pos
      
      - Removed the unused variable handler::sortkey
      
      
      Upper level handler changes:
      
      - ha_reset() now does some overall checks and calls ::reset()
      - ha_table_flags() added. This is a cached version of table_flags(). The
        cache is initialized at handler creation time and updated on open.
      
      
      MySQL level changes (not obvious from the above):
      
      - DBUG_ASSERT() added to check that column usage matches what is set
        in the column usage bitmaps. (This found a LOT of bugs in the
        current column marking code.)
      
      - In 5.1 before this change, all used columns were marked in read_set
        and only updated columns were marked in write_set. Now we only mark
        columns in read_set for which we need a value.
      
      - Column bitmaps are created in open_binary_frm() and open_table_from_share().
        (Before this was in table.cc)
      
      - handler::table_flags() calls are replaced with handler::ha_table_flags()
      
      - For calling field->val() you must have the corresponding bit set in
        table->read_set. For calling field->store() you must have the
        corresponding bit set in table->write_set. (There are asserts in
        all store()/val() functions to catch wrong usage; see the sketch
        below.)
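        A sketch of what such an assert looks like inside a Field accessor
        (shape only; Field_example and unpack_real are stand-in names):

        double Field_example::val_real()
        {
          DBUG_ASSERT(bitmap_is_set(table->read_set, field_index));
          return unpack_real();    // the actual conversion would follow
        }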
      
      - thd->set_query_id is renamed to thd->mark_used_columns and instead
        of being set to an integer value it now has the values:
        MARK_COLUMNS_NONE, MARK_COLUMNS_READ, MARK_COLUMNS_WRITE
        All variables named 'set_query_id' were also renamed to
        mark_used_columns.
      
      - In filesort() we now inform the handler of exactly which columns are
        needed for doing the sort and choosing the rows.
      
      - The TABLE_SHARE object has an 'all_set' column bitmap one can use
        when one needs a column bitmap with all columns set.
        (This is used for table->use_all_columns() and other places.)
      
      - The TABLE object has 3 column bitmaps:
        - def_read_set     Default bitmap for columns to be read
        - def_write_set    Default bitmap for columns to be written
        - tmp_set          Can be used as a temporary bitmap when needed.
        The table object also has two pointers to bitmaps, read_set and
        write_set, that the handler should use to find out which columns
        are used and in which way.
      
      - count() optimization now calls handler::records() instead of using
        handler->stats.records (if (table_flags() & HA_HAS_RECORDS) is true).
      
      - Added extra argument to Item::walk() to indicate if we should also
        traverse subqueries.
      
      - Added TABLE parameter to cp_buffer_from_ref()
      
      - Don't close tables created with CREATE ... SELECT but keep them in
        the table cache. (Faster usage of newly created tables).
      
      
      New interfaces:
      
      - table->clear_column_bitmaps() to initialize the bitmaps for tables
        at start of new statements.
      
      - table->column_bitmaps_set() to set up new column bitmaps and signal
        the handler about this.
      
      - table->column_bitmaps_set_no_signal() for a few cases where we need
        to set up new column bitmaps but not signal the handler (as the
        handler has already been signaled about these before). Used for the
        moment only in opt_range.cc when doing ROR scans.
      
      - table->use_all_columns() to install a bitmap where all columns are
        marked as used in the read and the write set.
      
      - table->default_column_bitmaps() to install the normal read and write
        column bitmaps without signaling the handler about this.
        This is mainly used when creating TABLE instances.
      
      - table->mark_columns_needed_for_delete(),
        table->mark_columns_needed_for_update() and
        table->mark_columns_needed_for_insert() to allow us to put additional
        columns in the column usage maps if the handler so requires.
        (The handler indicates what it needs in handler->table_flags().)
      
      - table->prepare_for_position() to allow us to tell the handler that it
        needs to read primary key parts to be able to store them in
        future table->position() calls.
        (This replaces the table->file->ha_retrieve_all_pk function.)
      
      - table->mark_auto_increment_column() to tell the handler that we are
        going to update columns that are part of any auto_increment key.
      
      - table->mark_columns_used_by_index() to mark all columns that are part
        of an index.  It will also send extra(HA_EXTRA_KEYREAD) to the
        handler to let it quickly know that it only needs to read the columns
        that are part of the key.  (The handler can also use the column map
        for detecting this, but a simpler/faster handler can just monitor the
        extra() call.)
      
      - table->mark_columns_used_by_index_no_reset() to mark, in addition to
        the columns already marked, all columns that are used by the given
        key.
      
      - table->restore_column_maps_after_mark_index() to restore to default
        column maps after a call to table->mark_columns_used_by_index().
      
      - New item function register_field_in_read_map(), for marking used
        columns in table->read_set. Used by filesort() to mark all used
        columns.
      
      - Maintain in TABLE->merge_keys a set of all keys that are used in the
        query. (Simplifies some optimization loops.)
      
      - Maintain Field->part_of_key_not_clustered, which is like
        Field->part_of_key but does not assume that a field in the clustered
        key is part of all indexes. (Used in opt_range.cc for faster loops.)
      
      - dbug_tmp_use_all_columns(), dbug_tmp_restore_column_map(),
        tmp_use_all_columns() and tmp_restore_column_map() functions to
        temporarily mark all columns as usable.  The 'dbug_' versions are
        primarily intended for use inside a handler when it just wants to
        call the Field::store() & Field::val() functions but doesn't need
        the column maps set for any other usage
        (i.e. bitmap_is_set() is never called).
      
      - We can't use compare_records() to skip updates for handlers that
        return a partial column set when the read_set doesn't cover all
        columns in the write set. The reason for this is that if we have a
        column marked only for write, we can't at the MySQL level know
        whether its value changed or not.
        The reason this worked before was that MySQL marked all columns to
        be written as also to be read. The new 'optimal' bitmaps exposed
        this 'hidden bug'. The resulting guard is sketched below.
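        A sketch of the guard's shape around the update path (illustrative
        only, inside the per-row update loop):

        bool can_compare_record=
          !(table->file->ha_table_flags() & HA_PARTIAL_COLUMN_READ) ||
          bitmap_is_subset(table->write_set, table->read_set);
        if (can_compare_record && !compare_records(table))
          continue;                // row unchanged, skip the update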
      
      - open_table_from_share() no longer sets up a temporary MEM_ROOT
        object as a thread-specific variable for the handler. Instead we
        send the to-be-used MEM_ROOT to get_new_handler().
        (Simpler, faster code.)
      
      
      
      Bugs fixed:
      
      - Column marking was not done correctly in a lot of cases
        (ALTER TABLE, when using triggers, auto_increment fields etc).
        (This could potentially result in wrong values being inserted by
        table handlers relying on the old column maps or field->query_id
        being correct.)
        Especially when it comes to triggers, there may be cases where the
        old code would cause lost/wrong values for NDB and/or InnoDB tables.
      
      - Split thd->options flag OPTION_STATUS_NO_TRANS_UPDATE to two flags:
        OPTION_STATUS_NO_TRANS_UPDATE and OPTION_KEEP_LOG.
        This allowed me to remove some wrong warnings about:
        "Some non-transactional changed tables couldn't be rolled back"
      
      - Fixed handling of INSERT .. SELECT and CREATE ... SELECT that wrongly
        reset (thd->options & OPTION_STATUS_NO_TRANS_UPDATE), which caused us
        to lose some warnings about
        "Some non-transactional changed tables couldn't be rolled back".
      
      - Fixed use of uninitialized memory in ha_ndbcluster.cc::delete_table()
        which could cause delete_table to report random failures.
      
      - Fixed core dumps for some tests when running with --debug
      
      - Added missing FN_LIBCHAR in mysql_rm_tmp_tables()
        (This has probably caused us to not properly remove temporary files after
        crash)
      
      - slow_logs was not properly initialized, which could maybe cause
        extra/lost entries in the slow log.
      
      - If we get a duplicate row on insert, change the column map to read
        and write all columns while retrying the operation. This is required
        by the definition of REPLACE and also ensures that fields that are
        only part of UPDATE are properly handled.  This fixed a bug in NDB
        and REPLACE where REPLACE wrongly copied some column values from the
        replaced row.
      
      - For table handlers that don't support NULL in keys, we would give an
        error when creating a primary key with NULL fields, even after the
        fields had been automatically converted to NOT NULL.
      
      - Creating a primary key on a SPATIAL key would fail if the field was
        not declared as NOT NULL.
      
      
      Cleanups:
      
      - Removed the unused condition argument to setup_tables
      
      - Removed the no longer needed item function reset_query_id_processor().
      
      - Field->add_index is removed. Now this is instead maintained in
        (field->flags & FIELD_IN_ADD_INDEX)
      
      - Field->fieldnr is removed (use field->field_index instead)
      
      - New argument to filesort() to indicate that it should return a set of
        row pointers (not used columns). This allowed me to remove some references
        to sql_command in filesort and should also enable us to return column
        results in some cases where we couldn't before.
      
      - Changed column bitmap handling in opt_range.cc to be aligned with TABLE
        bitmap, which allowed me to use bitmap functions instead of looping over
        all fields to create some needed bitmaps. (Faster and smaller code)
      
      - Broke up lines that were found to be too long
      
      - Moved some variable declarations to the start of functions for
        better code readability.
      
      - Removed some not used arguments from functions.
        (setup_fields(), mysql_prepare_insert_check_table())
      
      - setup_fields() now takes an enum instead of an int for marking
        column usage.
      
      - For internal temporary tables, use handler::write_row(),
        handler::delete_row() and handler::update_row() instead of
        handler::ha_xxxx() for faster execution.
      
      - Changed some constants to enum's and define's.
      
      - Using separate column read and write sets allows for easier checking
        of whether the timestamp field was set by the statement.
      
      - Removed calls to free_io_cache(), as this is now done automatically
        in ha_reset()
      
      - Don't build table->normalized_path as this is now identical to table->path
        (after bar's fixes to convert filenames)
      
      - Fixed some missed DBUG_PRINT(.."%lx") to use "0x%lx" to make it
        easier to do comparisons with the 'convert-dbug-for-diff' tool.
      
      
      Things left to do in 5.1:
      
      - We wrongly log failed CREATE TABLE ... SELECT in some cases when using
        row based logging (as shown by testcase binlog_row_mix_innodb_myisam.result)
        Mats has promised to look into this.
      
      - Test that my fix for CREATE TABLE ... SELECT is indeed correct.
        (I added several test cases for this, but in this case it's better
        that someone else also tests this thoroughly.)
        Lars has promised to do this.
  8. 21 Apr, 2006 1 commit
  9. 20 Apr, 2006 2 commits
  10. 19 Apr, 2006 2 commits
  11. 12 Apr, 2006 1 commit
    • Fixed bug#14169: type of group_concat() result changed to blob if tmp_table was used · ac54aa2a
      evgen@moonbone.local authored
      
      In simple queries the result of the GROUP_CONCAT() function was always
      of varchar type.
      But if the length of the GROUP_CONCAT() result is greater than 512
      chars and a temporary table is used during the select, then the result
      is converted to blob, due to the policy of not storing fields longer
      than 512 chars in a tmp table as varchar fields.
      
      In order to provide consistent behaviour, the result of GROUP_CONCAT()
      will now always be converted to blob if it is longer than 512 chars.
      Item_func_group_concat::field_type() is modified accordingly (sketched
      below).
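      A hedged sketch of the adjusted type decision (shape only; the real
      method deals with more cases):

        enum_field_types Item_func_group_concat::field_type() const
        {
          if (max_length / collation.collation->mbmaxlen > 512)
            return MYSQL_TYPE_BLOB;    // too long for a varchar tmp field
          return MYSQL_TYPE_VARCHAR;
        }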
  12. 07 Apr, 2006 1 commit
  13. 29 Mar, 2006 1 commit
    • Fixed bug#15560: GROUP_CONCAT wasn't ready for WITH ROLLUP queries · 1c13e548
      evgen@moonbone.local authored
      GROUP_CONCAT uses its own temporary table. When ROLLUP is present
      it creates a second copy of Item_func_group_concat. This copy receives
      the same list of arguments that the original group_concat does. When
      the copy is set up, the result_fields of the functions from the
      argument list are reset to the temporary table of this copy.
      As a result of this, data from the functions flows directly to the
      ROLLUP copy and the original group_concat function shows a wrong
      result.
      Since queries with COUNT(DISTINCT ...) use temporary tables to store
      the results of the COUNT function, they are also affected by this bug.
      
      The idea of the fix is to copy the content of the result_field for the
      function under GROUP_CONCAT/COUNT from the first temporary table to
      the second one, rather than setting result_field to point to the
      second temporary table.
      To achieve this goal a force_copy_fields flag is added to the
      Item_func_group_concat and Item_sum_count_distinct classes. This flag
      is initialized to 0 and set to 1 in the make_unique() member function
      of both classes (see the sketch below).
      The TMP_TABLE_PARAM structure is modified to include a similar flag
      as well.
      The create_tmp_table() function passes that flag to create_tmp_field().
      When the flag is set, the create_tmp_field() function will use
      result_field as a source field and will not reset that result field to
      the newly created field for Item_func_result_field and its descendants.
      Due to this a copy function will be created to copy data from the old
      result_field to the newly created field.
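      A sketch of the flag plumbing's shape (not the actual patch):

        void Item_func_group_concat::make_unique()
        {
          force_copy_fields= 1;    // tell create_tmp_field() to copy from
                                   // the old result_field instead of
                                   // resetting it
        }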
  14. 22 Feb, 2006 1 commit
  15. 21 Jan, 2006 1 commit
  16. 18 Nov, 2005 1 commit
  17. 02 Nov, 2005 1 commit
  18. 15 Oct, 2005 2 commits
  19. 07 Sep, 2005 1 commit
  20. 05 Sep, 2005 1 commit
  21. 31 Aug, 2005 1 commit
    • Fix bug #12861 client hang with group_concat in subquery FROM DUAL. · f1fb30a1
      evgen@moonbone.local authored
      Item_func_group_concat::fix_fields() set the maybe_null flag to 0, and
      set it to 1 only if some of its arguments may be null. When used in a
      subquery, the field created in the tmp table therefore couldn't be
      null. When no data is retrieved, the result field has to be set to
      null, and the error mentioned in the bug report occurs. This bug can
      also occur when selecting from a NOT NULL field in an empty table.

      The group_concat function is now marked maybe_null from the very
      beginning, not only when some of its arguments may be null.
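      The core of the fix is essentially one line; a sketch of its shape,
      inside Item_func_group_concat::fix_fields():

        maybe_null= 1;    // always nullable: an empty result set yields NULL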
  22. 30 Aug, 2005 1 commit
    • Bug #12829 · 98581508
      bar@mysql.com authored
      Cannot convert the charset of a GROUP_CONCAT result:
      
      item_sum.cc:
        "result" character set was not set into proper value.
      
      func_gconcat.result, func_gconcat.test:
        Fixing tests accordingly.
  23. 29 Jul, 2005 1 commit
    • func_gconcat.result, func_gconcat.test: · a5f2c752
      igor@rurik.mysql.com authored
        Added a test case for bug #12095.
      sql_class.h:
        Fixed bug #12095: a join query with GROUP_CONCAT over a single row
        table.
        Added a flag to the TMP_TABLE_PARAM class forcing constant items
        generated after elimination of a single row table to be put into the
        temp table in some cases (e.g. when GROUP_CONCAT is calculated over
        a single row table).
      item_sum.cc:
        Fixed bug #12095: a join query with GROUP_CONCAT over a single row
        table.
        If GROUP_CONCAT is calculated we always put its argument into a temp
        table, even when the argument is a constant item.
      sql_select.cc:
        Fixed bug #12095: a join query with GROUP_CONCAT over a one row
        table.
        If a temp table is used to calculate GROUP_CONCAT, the argument
        should always be put into this table, even when it is a constant
        item.
  24. 26 Jul, 2005 2 commits
    • Bug#10201 group_concat returns string with binary collation · 991e3442
      bar@mysql.com authored
      item.cc:
        After merge fixes.
      func_gconcat.result:
        After merge fixes
    • func_gconcat.result, func_gconcat.test: · 0c2035b7
      bar@mysql.com authored
        Adding a test.
      item_sum.cc:
        Adding a call for collation/charset aggregation,
        to collect attributes from the arguments. The actual bug fix.
      item_func.h, item_func.cc, item.h, item.cc:
        - Removing the collation aggregation functions from the Item_func
          class and adding them as non-class functions in item.cc,
          to be able to reuse this code for group_concat.
        - Adding replacements for these functions to the Item_func class
          as wrappers for the moved functions, to minimize patch size.
  25. 03 Jun, 2005 1 commit
    • Move USE_PRAGMA_IMPLEMENTATION to proper place · 29fd1f2f
      monty@mysql.com authored
      Ensure that 'null_value' is not accessed before val() is called in
      FIELD() functions.
      Fixed initialization of key maps. This fixes some problems with keys
      when you have more than 64 keys.
      Fixed that ROLLUP doesn't always create a temporary table. This fix
      ensures that func_gconcat.test results are now predictable.
  26. 31 May, 2005 1 commit
  27. 17 Mar, 2005 2 commits
  28. 16 Mar, 2005 1 commit
  29. 15 Jan, 2005 2 commits
  30. 10 Nov, 2004 1 commit
  31. 10 Oct, 2004 1 commit
  32. 01 Sep, 2004 1 commit
  33. 23 Aug, 2004 1 commit
  34. 13 Aug, 2004 1 commit