- 27 Sep, 2007 5 commits
  - marko authored:
    if-else with switch.
  - marko authored
  - marko authored:
    Since r1905, innobase_rec_to_mysql() does not require a clustered index record.
    row_merge_dup_t: Remove old_table.
    row_merge_dup_report(): Do not fetch the clustered index record. Simply convert the tuple with innobase_rec_to_mysql().
    row_merge_blocks(), row_merge(), row_merge_sort(): Add a TABLE* parameter for reporting duplicate key values during the file sort.
    row_merge_read_clustered_index(): Replace UNIV_PAGE_SIZE with the more appropriate sizeof(mrec_buf_t).
  - marko authored:
    clustered or secondary. Remove the rec_offs_validate() assertion, because the function may be passed an mrec_t* that would fail the check.
  - marko authored:
    This should have been done in r1903, where the minimum size of row_merge_block_t was noted to be UNIV_PAGE_SIZE.

- 26 Sep, 2007 8 commits
  - marko authored:
    row_merge_buf_add(): Add ut_ad(data_size < sizeof(row_merge_block_t)) and document why it may fail if sizeof(row_merge_block_t) < UNIV_PAGE_SIZE.
  - marko authored:
    row_merge_dup_report(): Do not call innobase_rec_reset().
  - marko authored:
    dtuple_create_for_mysql(), dtuple_free_for_mysql(): Remove.
    ha_innobase::records_in_range(): Use mem_heap_create(), mem_heap_free(), and dtuple_create() instead of the removed functions above. Since r1587, InnoDB C++ functions can invoke inlined C functions.
  - marko authored:
    innobase_rec_to_mysql(): New function, for converting an InnoDB clustered index record to MySQL table->record[0]. TODO: convert integer fields. Currently, integer fields are in big-endian byte order instead of host byte order, and signed integer fields are offset by 0x80000000.
    innobase_rec_reset(): New function, for resetting table->record[0].
    row_merge_build_indexes(): Add the parameter TABLE* table (the MySQL table handle) for reporting duplicate key values.
    dtuple_from_fields(): New function, to convert an array of dfield_t* to a dtuple_t.
    dtuple_get_n_ext(): New function, to compute the number of externally stored fields.
    row_merge_dup_t: Structure for counting and reporting duplicate records.
    row_merge_dup_report(): Function for counting and reporting duplicate records.
    row_merge_tuple_cmp(), row_merge_tuple_sort(): Replace the ulint* n_dup parameter with row_merge_dup_t* dup.
    row_merge_buf_sort(): Add the parameter row_merge_dup_t* dup, which is NULL when sorting a non-unique index.
    row_merge_buf_write(), row_merge_heap_create(), row_merge_read_rec(), row_merge_cmp(), row_merge_read_clustered_index(), row_merge_blocks(), row_merge(), row_merge_sort(): Add const qualifiers.
    row_merge_read_clustered_index(): Use a common error handling branch, err_exit. Invoke row_merge_buf_sort() differently on unique indexes.
    row_merge_blocks(): Note a TODO: we could invoke innobase_rec_to_mysql() to report duplicate key values when creating a clustered index.
  - marko authored:
    dict_find_index_by_max_id(): Rename this static function to its only caller, dict_table_get_index_by_max_id().
    dict_table_get_index_by_max_id(): Copy the function comment from dict_find_index_by_max_id().
  - marko authored:
    rec_get_converted_size_comp(), rec_convert_dtuple_to_rec_comp(), rec_convert_dtuple_to_rec_new(), rec_convert_dtuple_to_rec(): Add a const qualifier to dict_index_t*.
    row_search_on_row_ref(): Add const qualifiers to the dict_table_t* and dtuple_t* parameters. Note that pcur is an "out" parameter and mtr is "in/out".
  - marko authored:
    row_build_row_ref_fast(): Note that "ref" is an in/out parameter.
    row_build_row_ref_from_row(): Add const qualifiers to all "in" parameters.
  - marko authored:
    dtuple_create(): Simplify a pointer expression. Flag the fields uninitialized after initializing them in the debug version.
    dtuple_t: Only declare magic_n if UNIV_DEBUG is defined. The field is neither assigned nor tested unless UNIV_DEBUG is defined.

- 25 Sep, 2007 1 commit
  - marko authored:
    to avoid a rec_get_offsets() call. Add some const qualifiers.
    row_sel_get_clust_rec_for_mysql(): Note that "offsets" will also be an input parameter.

- 24 Sep, 2007 6 commits
  - marko authored:
    dict_index_t* and dict_table_t* parameters of some functions.
  - vasil authored:
    Copy any data (currently the table name and table index) that may be destroyed after releasing the kernel mutex into the internal cache's storage. This is done efficiently using the ha_storage type: a given string is copied into the cache's storage only once, and later additions of the same string reuse the already stored copy, so memory is allocated only once per unique string.
    Approved by: Marko
  - vasil authored:
    Add a type that stores chunks of data in its own storage and avoids duplicates. Supported methods:
    ha_storage_create(): Allocates a new storage object.
    ha_storage_put(): Copies a given data chunk into the storage and returns a pointer to the copy. If the data chunk is already present, a pointer to the existing copy is returned and the given chunk is not copied again.
    ha_storage_empty(): Clears (empties) the storage of all data chunks stored in it.
    ha_storage_free(): Destroys a storage object; the opposite of ha_storage_create().
    Approved by: Marko
  - marko authored:
    rec_get_n_fields(), rec_offs_validate(), and rec_offs_make_valid().
  - marko authored:
    This was inadvertently reduced to 16384 bytes in r1861. For testing, this can be set as low as UNIV_PAGE_SIZE.
  - marko authored:
    row_merge(): Add the assertion ut_ad(half > 0).
    row_merge_sort(): Compute the half of the merge file correctly. The previous implementation used truncating division, which may result in loss of records when the file size in blocks is not a power of 2.

- 22 Sep, 2007 4 commits
  - vasil authored:
    Non-functional: put the code that clears the IS cache into a separate function.
  - vasil authored:
    Cosmetic: initialize the members of the cache in the same order as they are defined in the structure.
  - vasil authored:
    Make a comment clearer (hopefully).
  - vasil authored:
    Use the newly introduced mem_alloc2() to make use of the memory that was allocated beyond the requested amount, so that no memory is wasted. Do not calculate the sizes and offsets of the chunks in advance in table_cache_init(), because it is unknown how many bytes mem_alloc2() will actually allocate. Instead, calculate them on the fly: after each chunk is allocated, set its size and the offset of the next chunk.
    Similar patch approved by: Marko

- 21 Sep, 2007 7 commits
  - marko authored:
    merge buffer, write the next record to the beginning of the emptied buffer. This fixes one of the bugs mentioned in r1872.
  - marko authored:
    Some bug still remains, because innodb-index.test will lose some records from the clustered index after add primary key (a,b(255),c(255)) when row_merge_block_t is reduced to 8192 bytes.
    row_merge(): Add the parameter "half". Add some Valgrind instrumentation. Note that either stream can end before the other one.
    row_merge_sort(): Calculate "half" for row_merge().
  - marko authored:
    mem_alloc2(): New macro. This is a variant of mem_alloc() that also returns the allocated size, which is equal to or greater than the requested size.
    mem_alloc_func(): Add the output parameter *size for the allocated size. When it is set, adjust the size passed to mem_heap_alloc().
    rec_copy_prefix_to_buf_old(), rec_copy_prefix_to_buf(): Use mem_alloc2() instead of mem_alloc().
  - marko authored:
    row_merge_print_read and row_merge_print_write.
  - marko authored:
    was actually obtained from the buddy allocator. This should avoid some internal memory fragmentation in mem_heap_create() and mem_heap_alloc().
    mem_area_alloc(): Change the "in" parameter size to an in/out parameter. Adjust the size based on what was obtained from pool->free_list[].
    mem_heap_create_block(): Adjust block->len to what was obtained from mem_area_alloc().
  - marko authored:
    column d to two SELECT FROM t1.
  - marko authored:
    rec_print_comp(): New function, sliced from rec_print_new().
    rec_print_old(), rec_print_comp(): Print the untruncated length of the column.
    row_merge_print_read, row_merge_print_write, row_merge_print_cmp: New flags, to enable debug printout in UNIV_DEBUG builds.
    row_merge_tuple_print(): New function for UNIV_DEBUG builds.
    row_merge_read_rec(): Obey row_merge_print_read.
    row_merge_buf_write(), row_merge_write_rec_low(), row_merge_write_eof(): Obey row_merge_print_write.
    row_merge_cmp(): Obey row_merge_print_cmp.

- 20 Sep, 2007 3 commits
  - marko authored:
    in fast index creation.
    row_merge_write_eof(), row_merge_buf_write(): When UNIV_DEBUG_VALGRIND is defined, fill the rest of the block (after the end-of-block marker) with 0xff.
  - vasil authored:
    innodb_lock_waits. See https://svn.innodb.com/innobase/InformationSchema/TransactionsAndLocks for design notes.
    Things that need to be resolved before this goes live:
    * MySQL must add a thd_get_thread_id() function to their code: http://bugs.mysql.com/30930
    * Allocate memory from a mem_heap instead of using mem_alloc()
    * Copy the table name and index name into the cache, because they may be freed later, which would result in referencing freed memory
    Approved by: Marko
  - marko authored

- 19 Sep, 2007 6 commits
  - marko authored:
    of fast index creation.
  - marko authored:
    row_merge_read_rec(): Correct a typo in a comment. Fix an arithmetic error when the record spans two blocks.
    row_merge_write_rec_low(): Add a "size" parameter. Add debug assertions about extra_size and size.
    row_merge_write_rec(): After writing a record, properly advance the buffer pointer.
  - marko authored:
    all columns present in offsets. Add a const qualifier to the dict_index_t* parameter.
  - marko authored:
    Add const qualifiers.
  - marko authored:
    that row_merge_blocks() will have some work to do when row_merge_block_t is shrunk to 8192 bytes. Currently, this causes a debug assertion failure, because row_merge_cmp() considers all columns, not just the unique ones.
  - marko authored:
    Correctly handle node pointer records containing variable-length columns with two-byte lengths.