Commit 43502bb0 authored by monty@donna.mysql.com

Merge work:/my/mysql into donna.mysql.com:/home/my/bk/mysql

parents caed453e 9a91d46f
......@@ -187,3 +187,4 @@ sql-bench/Results-linux/ATIS-mysql_bdb-Linux_2.2.14_my_SMP_i686
Docs/my_sys.doc
tmp/*
extra/resolve_stack_dump
sql/share/*.sys
jani@hynda.mysql.fi
monty@donna.mysql.com
......@@ -288,6 +288,11 @@ BSD/OS Notes
* BSDI3:: BSD/OS 3.x notes
* BSDI4:: BSD/OS 4.x notes
Mac OS X Notes
* Mac OS X Public Data::
* Mac OS X Server::
Windows Notes
* Windows installation:: Installing @strong{MySQL} on Windows
......@@ -610,6 +615,7 @@ MySQL Utilites
* mysql:: The command line tool
* mysqladmin:: Administering a @strong{MySQL} server
* mysqldump:: Dumping the structure and data from @strong{MySQL} databases and tables
* mysqlhotcopy:: Copying @code{MySQL} Databases and Tables
* mysqlimport:: Importing data from text files
* perror:: Displaying error messages
* mysqlshow:: Showing databases, tables and columns
......@@ -846,7 +852,7 @@ Changes in release 4.0.x (Development; Alpha)
Changes in release 3.23.x (Recommended; Gamma)
* News-3.23.31::
* News-3.23.31:: Changes in release 3.23.31
* News-3.23.30:: Changes in release 3.23.30
* News-3.23.29:: Changes in release 3.23.29
* News-3.23.28:: Changes in release 3.23.28
......@@ -2260,6 +2266,11 @@ FutureForum Web Discussion Software.
SupportWizard; Interactive helpdesk on the Web (This product includes a
licensed copy of @strong{MySQL}.)
@item @uref{http://www.sonork.com/}@*
Sonork, an Instant Messenger that is not only Internet oriented. It is
focused on private networks and on small to medium companies. The client
is free; the server is free for up to 5 seats.
@item @uref{http://www.stweb.org/}@*
StWeb - Stratos Web and Application server - An easy-to-use, cross
platform, Internet/Intranet development and deployment system for
......@@ -7965,12 +7976,30 @@ by HP's compilers. I did not change the flags.
@node Mac OS X, BEOS, HP-UX 11.x, Source install system issues
@subsection Mac OS X Notes
You can get @strong{MySQL} to work on Mac OS X by following the links to
the Mac OS X ports. @xref{Useful Links}.
@menu
* Mac OS X Public Data::
* Mac OS X Server::
@end menu
@node Mac OS X Public Data, Mac OS X Server, Mac OS X, Mac OS X
@subsubsection Mac OS X Public beta
@strong{MySQL} should work without any problems on the Mac OS X public beta
(Darwin); you don't need the pthread patches for this OS!
@node Mac OS X Server, , Mac OS X Public Data, Mac OS X
@subsubsection Mac OS X Server
Before trying to configure @strong{MySQL} on Mac OS X Server you must
first install the pthread package from
@uref{http://www.prnet.de/RegEx/mysql.html}. Note that this is not needed
@strong{MySQL} Version 3.23.7 should include all patches necessary to configure
it on Mac OS X. You must, however, first install the pthread package from
@uref{http://www.prnet.de/RegEx/mysql.html} before configuring @strong{MySQL}.
Our binary for Mac OS X is compiled on Rhapsody 5.5 with the following
configure line:
@example
CC=gcc CFLAGS="-O2 -fomit-frame-pointer" CXX=gcc CXXFLAGS="-O2 -fomit-frame-pointer" ./configure --prefix=/usr/local/mysql "--with-comment=Official MySQL binary" --with-extra-charsets=complex --disable-shared
@end example
You might also want to add aliases to your shell's resource file to
access @code{mysql} and @code{mysqladmin} from the command line:
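A minimal sketch of such aliases for a csh-style shell, assuming the binaries
were installed under @file{/usr/local/mysql/bin}:

@example
alias mysql /usr/local/mysql/bin/mysql
alias mysqladmin /usr/local/mysql/bin/mysqladmin
@end example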
......@@ -8196,8 +8225,8 @@ which protocol is used:
@end multitable
You can force a @strong{MySQL} client to use named pipes by specifying the
@code{--pipe} option. Use the @code{--socket} option to specify the name of
the pipe.
@code{--pipe} option or by specifying @code{.} as the host name.
Use the @code{--socket} option to specify the name of the pipe.
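For example, a minimal sketch of forcing a named-pipe connection from the
command line (assuming the default pipe name @code{MySQL} and the standard
@code{test} database):

@example
shell> mysql --pipe test
shell> mysql --host=. --socket=MySQL test
@end example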
You can test whether or not @strong{MySQL} is working by executing the
following commands:
......@@ -9344,7 +9373,9 @@ Don't flush key buffers between writes for any @code{MyISAM} table.
Enable system locking.
@item -T, --exit-info
Print some debug info at exit.
This is a bit mask of different flags one can use for debugging the
mysqld server; One should not use this option if one doesn't know
exactly what it does!
@item --flush
Flush all changes to disk after each SQL command. Normally @strong{MySQL}
......@@ -18443,7 +18474,7 @@ BACKUP TABLE tbl_name[,tbl_name...] TO '/path/to/backup/directory'
Makes a copy in the backup directory of the minimum set of table files
needed to restore the table. Currently this only works for @code{MyISAM}
tables. For a @code{MyISAM} table, it copies the @code{.frm} (definition) and
@code{.MYD} (data) files. The index file can be rebuilt from those two.
@code{.MYD} (data) files. The index file can be rebuilt from those two.
During the backup, a read lock will be held for each table, one at a time,
as they are being backed up. If you want to back up several tables as
......@@ -20167,8 +20198,8 @@ The status variables listed above have the following meaning:
@multitable @columnfractions .35 .65
@item @strong{Variable} @tab @strong{Meaning}
@item @code{Aborted_clients} @tab Number of connections aborted because the client died without closing the connection properly.
@item @code{Aborted_connects} @tab Number of tries to connect to the @strong{MySQL} server that failed.
@item @code{Aborted_clients} @tab Number of connections aborted because the client died without closing the connection properly. @xref{Communication errors}.
@item @code{Aborted_connects} @tab Number of tries to connect to the @strong{MySQL} server that failed. @xref{Communication errors}.
@item @code{Bytes_received} @tab Number of bytes received from all clients.
@item @code{Bytes_sent} @tab Number of bytes sent to all clients.
@item @code{Connections} @tab Number of connection attempts to the @strong{MySQL} server.
......@@ -20244,12 +20275,6 @@ If @code{Handler_read_rnd} is big, then you probably have a lot of
queries that require @strong{MySQL} to scan whole tables or you have
joins that don't use keys properly.
@item
If @code{Created_tmp_tables} or @code{Sort_merge_passes} are high, then
your @code{mysqld} @code{sort_buffer} variable is probably too small.
@item
@code{Created_tmp_files} doesn't count the files needed to handle temporary
tables.
@item
If @code{Threads_created} is big, you may want to increase the
@code{thread_cache_size} variable.
@end itemize
......@@ -20274,6 +20299,7 @@ differ somewhat:
| back_log | 50 |
| basedir | /my/monty/ |
| bdb_cache_size | 16777216 |
| bdb_log_buffer_size | 32768 |
| bdb_home | /my/monty/data/ |
| bdb_max_lock | 10000 |
| bdb_logdir | |
......@@ -20381,6 +20407,12 @@ The value of the @code{--basedir} option.
@item @code{bdb_cache_size}
The buffer that is allocated to cache index and rows for @code{BDB}
tables. If you don't use @code{BDB} tables, you should start
@code{mysqld} with @code{--skip-bdb} to not waste memory for this
cache.
@item @code{bdb_log_buffer_size}
The buffer that is allocated to buffer the transaction log for @code{BDB}
tables. If you don't use @code{BDB} tables, you should set this to 0 or
start @code{mysqld} with @code{--skip-bdb} to not waste memory for this
cache.
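A hedged sketch of the corresponding startup commands (assuming
@code{safe_mysqld} is used to start the server and that this variable can be
changed with @code{-O}):

@example
shell> safe_mysqld --skip-bdb &
shell> safe_mysqld -O bdb_log_buffer_size=262144 &
@end example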
......@@ -20552,7 +20584,13 @@ will be incremented. If you are using @code{--log-slow-queries}, the query
will be logged to the slow query logfile. @xref{Slow query log}.
@item @code{lower_case_table_names}
Table names are stored in lowercase on disk.
Is 1 if table names are stored in lowercase on disk. In @strong{MySQL} on Unix,
table names are normally case-sensitive; if this is a big problem for you, you
can start @code{mysqld} with @code{-O lower_case_table_names=1}.
In this case @strong{MySQL} will convert all table names to lower case on
storage and lookup. Note that you need to first convert your old table
names to lower case before starting @code{mysqld} with this option.
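A minimal sketch of starting the server with this option (assuming
@code{safe_mysqld} is used and the old table names have already been renamed
to lower case):

@example
shell> safe_mysqld -O lower_case_table_names=1 &
@end example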
@item @code{max_allowed_packet}
The maximum size of one packet. The message buffer is initialized to
......@@ -21289,7 +21327,7 @@ When you use @code{LOCK TABLES}, you must lock all tables that you are
going to use and you must use the same alias that you are going to use
in your queries! If you are using a table multiple times in a query
(with aliases), you must get a lock for each alias! This policy ensures
that table locking is deadlock free andh makes the locking code smaller,
that table locking is deadlock free and makes the locking code smaller,
simpler and much faster.
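A minimal sketch of locking a hypothetical table @code{t1} both under its real
name and under an alias before querying it twice in the same statement (the
column @code{a} is only for illustration):

@example
mysql> LOCK TABLES t1 READ, t1 AS t1_alias READ;
mysql> SELECT * FROM t1, t1 AS t1_alias WHERE t1.a > t1_alias.a;
mysql> UNLOCK TABLES;
@end example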
Note that you should @strong{NOT} lock any tables that you are using with
......@@ -22678,10 +22716,10 @@ Berkeley DB (@uref{http://www.sleepycat.com}) has provided
crashes and also provides @code{COMMIT} and @code{ROLLBACK} on
transactions. In order to build MySQL Version 3.23.x (BDB support first
appeared in Version 3.23.15) with support for @code{BDB} tables, you
will need Berkeley DB Version 3.2.3d or newer which can be downloaded from
will need Berkeley DB Version 3.2.3g or newer which can be downloaded from
@uref{http://www.mysql.com/downloads/mysql-3.23.html}. This is a patched
version of Berkeley DB that is only available from @strong{MySQL}; the
standard Berkeley DB @strong{will not work with MySQL}.
standard Berkeley DB @strong{will not yet work with MySQL}.
@node BDB install, BDB start, BDB overview, BDB
@subsection Installing BDB
......@@ -22851,6 +22889,11 @@ TABLE}.
@itemize @bullet
@item
It's very slow to open many BDB tables at the same time. If you are
going to use BDB tables, you should not have a very big table cache (>
256 ?) and you should use @code{--no-auto-rehash} with the @code{mysql}
client. We plan to partly fix this in 4.0.
@item
@code{SHOW TABLE STATUS} doesn't yet provide that much information for BDB
tables.
@item
......@@ -28310,6 +28353,7 @@ How big a @code{VARCHAR} column can be
* mysql:: The command line tool
* mysqladmin:: Administering a @strong{MySQL} server
* mysqldump:: Dumping the structure and data from @strong{MySQL} databases and tables
* mysqlhotcopy:: Copying @strong{MySQL} Databases and Tables
* mysqlimport:: Importing data from text files
* perror:: Displaying error messages
* mysqlshow:: Showing databases, tables and columns
If you do @code{mysqladmin shutdown} on a socket (in other words, on
the computer where @code{mysqld} is running), @code{mysqladmin} will
wait until the @strong{MySQL} @code{pid-file} is removed to ensure that
the @code{mysqld} server has stopped properly.
@cindex dumping, databases
@cindex databases, dumping
@cindex tables, dumping
@cindex backing up, databases
@node mysqldump, mysqlimport, mysqladmin, Tools
@node mysqldump, mysqlhotcopy, mysqladmin, Tools
@section Dumping the Structure and Data from MySQL Databases and Tables
@cindex @code{mysqldump}
Utility to dump a database or a collection of databases for backup or
for transferring the data to another SQL server. The dump will contain SQL
statements to create the table and/or populate the table:
Utility to dump a database or a collection of databases for backup or for
transferring the data to another SQL server (not necessarily a MySQL
server). The dump will contain SQL statements to create the table
and/or populate the table.
If you are doing a backup on the server, you should consider using
@code{mysqlhotcopy} instead. @xref{mysqlhotcopy}.
@example
shell> mysqldump [OPTIONS] database [tables]
......@@ -29350,12 +29399,79 @@ If all the databases are wanted, one can use:
mysqldump --all-databases > all_databases.sql
@end example
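To reload such a dump, one can simply feed the file back to the @code{mysql}
client; a minimal sketch, assuming the file created above:

@example
shell> mysql < all_databases.sql
@end example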
@cindex dumping, databases
@cindex databases, dumping
@cindex tables, dumping
@cindex backing up, databases
@node mysqlhotcopy, mysqlimport, mysqldump, Tools
@section Copying MySQL Databases and Tables
@code{mysqlhotcopy} is a perl script that uses @code{LOCK TABLES},
@code{FLUSH TABLES} and @code{cp} or @code{scp} to quickly make a backup
of a database. It's the fastest way to make a backup of the database,
but it can only be run on the same machine where the database directories
are.
@example
mysqlhotcopy db_name [/path/to/new_directory]
mysqlhotcopy db_name_1 ... db_name_n /path/to/new_directory
mysqlhotcopy db_name./regex/
@end example
@code{mysqlhotcopy} supports the following options:
@table @code
@item -?, --help
Display a help screen and exit
@item -u, --user=#
User for database login
@item -p, --password=#
Password to use when connecting to server
@item -P, --port=#
Port to use when connecting to local server
@item -S, --socket=#
Socket to use when connecting to local server
@item --allowold
Don't abort if target already exists (rename it _old)
@item --keepold
Don't delete previous (now renamed) target when done
@item --noindices
Don't include full index files in the copy to make the backup smaller and
faster. The indexes can later be reconstructed with @code{myisamchk -rq}.
@item --method=#
Method for copy (@code{cp} or @code{scp}).
@item -q, --quiet
Be silent except for errors
@item --debug
Enable debug
@item -n, --dryrun
Report actions without doing them
@item --regexp=#
Copy all databases with names matching regexp
@item --suffix=#
Suffix for names of copied databases
@item --checkpoint=#
Insert checkpoint entry into specified db.table
@item --flushlog
Flush logs once all tables are locked.
@item --tmpdir=#
Temporary directory (instead of /tmp).
@end table
You can use @code{perldoc mysqlhotcopy} to get more complete documentation for
@code{mysqlhotcopy}.
@code{mysqlhotcopy} reads the group @code{[mysqlhotcopy]} from the option
files.
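A hedged sketch of such an option-file group (the option names correspond to
the long options listed above; the values are placeholders):

@example
[mysqlhotcopy]
user=backup_user
password=your_pass
@end example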
@cindex importing, data
@cindex data, importing
@cindex files, text
@cindex text files, importing
@cindex @code{mysqlimport}
@node mysqlimport, perror, mysqldump, Tools
@node mysqlimport, perror, mysqlhotcopy, Tools
@section Importing Data from Text Files
@code{mysqlimport} provides a command-line interface to the @code{LOAD DATA
......@@ -32882,22 +32998,49 @@ expecting to store the full length of a @code{BLOB} into a table, you'll need
to start the server with the @code{--set-variable=max_allowed_packet=16M}
option.
@cindex aborted clients
@cindex aborted connection
@cindex connection, aborted
@node Communication errors, Full table, Packet too large, Common errors
@subsection Communication Errors / Aborted Connection
If you find the error @code{Aborted connection} in the @code{hostname.err}
log file, this could be because of one of the following reasons:
The server variable @code{Aborted_clients} is incremented when:
@itemize @bullet
@item
The client had been sleeping more than @code{wait_timeout} without doing
any requests. @xref{SHOW VARIABLES}.
The client program did not call @code{mysql_close()} before exit.
@item
The client had been sleeping more than @code{wait_timeout} or
@code{interactive_timeout} without doing any requests. @xref{SHOW
VARIABLES}.
@item
The client program ended abruptly in the middle of the transfer.
@end itemize
When the above happens, the @code{mysqld} server will write a note about an
@code{Aborted connection} to the @code{hostname.err} log file.
The server variable @code{Aborted_connects} is incremented when:
@itemize @bullet
@item
The client program did not call @code{mysql_close()} before exit.
When a connection packet doesn't contain the right information.
@item
When the user didn't have privileges to connect to a database.
@item
When a user uses a wrong password.
@item
When it takes more than @code{connect_timeout} seconds to get
a connect packet.
@end itemize
Note that the above could indicate that someone is trying to break into
your database!
@xref{SHOW VARIABLES}.
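Both counters can be inspected while the server is running; a minimal sketch
using @code{mysqladmin}:

@example
shell> mysqladmin extended-status | grep Aborted
@end example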
Other reasons for problems with aborted clients / aborted connections:
@itemize @bullet
@item
Usage of duplex Ethernet protocol, both half and full with
Linux. Many Linux Ethernet drivers have this bug. You should test
......@@ -32915,6 +33058,7 @@ Faulty Ethernets or hubs or switches, cables ... This can be diagnosed
properly only by replacing hardware.
@end itemize
@cindex table is full
@node Full table, Cannot create, Communication errors, Common errors
@subsection @code{The table is full} Error
......@@ -33793,13 +33937,19 @@ mirror if needed. @code{LAST_INSERT_ID()} is also safe to use.
Because @strong{MySQL} tables are stored as files, it is easy to do a
backup. To get a consistent backup, do a @code{LOCK TABLES} on the
relevant tables. @xref{LOCK TABLES, , @code{LOCK TABLES}}. You only need a
read lock; this allows other threads to continue to query the tables while
you are making a copy of the files in the database directory. If you want to
make a SQL level backup of a table, you can use @code{SELECT INTO OUTFILE}.
relevant tables followed by @code{FLUSH TABLES} for the tables.
@xref{LOCK TABLES, , @code{LOCK TABLES}}.
@xref{FLUSH, , @code{FLUSH}}.
You only need a read lock; this allows other threads to continue to
query the tables while you are making a copy of the files in the
database directory.
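A minimal sketch of this sequence for two hypothetical tables @code{tbl1} and
@code{tbl2}:

@example
mysql> LOCK TABLES tbl1 READ, tbl2 READ;
mysql> FLUSH TABLES;
    (copy the @file{tbl1.*} and @file{tbl2.*} files from the database directory)
mysql> UNLOCK TABLES;
@end example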
Another way to back up a database is to use the @code{mysqldump} program:
@xref{mysqldump}.
If you want to make a SQL level backup of a table, you can use
@code{SELECT INTO OUTFILE} or @code{BACKUP
TABLE}. @xref{SELECT}. @xref{BACKUP TABLE}.
Another way to back up a database is to use the @code{mysqldump} program or
the @code{mysqlhotcopy script}. @xref{mysqldump}. @xref{mysqlhotcopy}.
@enumerate
@item
......@@ -33807,6 +33957,10 @@ Do a full backup of your databases:
@example
shell> mysqldump --tab=/path/to/some/dir --opt --full
or
shell> mysqlhotcopy database /path/to/some/dir
@end example
You can also simply copy all table files (@file{*.frm}, @file{*.MYD}, and
......@@ -33823,16 +33977,23 @@ you executed @code{mysqldump}.
@end enumerate
If you have to restore something, try to recover your tables using
@code{myisamchk -r} first. That should work in 99.9% of all cases. If
@code{myisamchk} fails, try the following procedure:
(This will only work if you have started @strong{MySQL} with
@code{REPAIR TABLE} or @code{myisamchk -r} first. That should work in
99.9% of all cases. If @code{myisamchk} fails, try the following
procedure: (This will only work if you have started @strong{MySQL} with
@code{--log-update}. @xref{Update log}.):
@enumerate
@item
Restore the original @code{mysqldump} backup.
@item
Execute the following command to re-run the updates in the update logs:
Execute the following command to re-run the updates in the binary log:
@example
shell> mysqlbinlog hostname-bin.[0-9]* | mysql
@end example
If you are using the update log you can use:
@example
shell> ls -1 -t -r hostname.[0-9]* | xargs cat | mysql
@end example
......@@ -40045,6 +40206,7 @@ developed a @strong{MySQL} feature themself or by giving us hardware for
@multitable @columnfractions .3 .7
@item Va Linux / Andover.net @tab Replication
@item NuSphere @tab Editing of the @strong{MySQL} manual.
@item Stork Design studio @tab The MySQL web site in use between 1998-2000
@item Intel @tab Contributed to development on Windows and Linux platforms
@item Compaq @tab Contributed to Development on Linux-alpha
@item SWSoft @tab Development on the embedded @code{mysqld} version.
......@@ -40143,6 +40305,10 @@ though, so Version 3.23 is not released as a stable version yet.
@appendixsubsec Changes in release 3.23.31
@itemize @bullet
@item
Fixed bug when using expression of type
@code{SELECT ... FROM t1 left join t2 on (t1.a=t2.a) WHERE t1.a=t2.a}. In this
case the test in the @code{WHERE} clause was wrongly optimized away.
@item
Fixed bug in @code{MyISAM} when deleting keys with possible @code{NULL}
values, but the first key-column was not a prefix-compressed text column.
@item
......@@ -40159,6 +40325,8 @@ Added @code{Threads_created} status variable to @code{mysqld}.
@appendixsubsec Changes in release 3.23.30
@itemize @bullet
@item
Fixed that @code{myisamdump} works against old mysqld servers.
@item
Fixed that @code{myisamchk -k#} works again.
@item
Fixed a problem with replication when the binary log file went over 2G
......@@ -44720,6 +44888,13 @@ will probably be ignored).
Doing a @code{LOCK TABLE ..} and @code{FLUSH TABLES ..} doesn't
guarantee that there isn't a half-finished transaction in progress on the
table.
@item
BDB tables are a bit slow to open. If you have many BDB tables
in a database, it will take a long time to use the @code{mysql} client
on the database if you are not using the @code{-A} option or if you are
using @code{rehash}. This is especially notable when you have a big table
cache.
@end itemize
The following problems are known and will be fixed in due time:
......@@ -44974,6 +45149,10 @@ Secure connections (with SSL).
Extend the optimizer to be able to optimize some @code{ORDER BY key_name DESC}
queries.
@item
@code{SHOW COLUMNS FROM table_name} (used by @code{mysql} client to allow
expansions of column names) should not open the table, but only the
definition file. This will require less memory and be much faster.
@item
New key cache
@end itemize
......@@ -45565,20 +45744,20 @@ Stop the mysqld daemon (with @code{mysqladmin shutdown})
Check all tables with @code{myisamchk -s database/*.MYI}. Repair any
wrong tables with @code{myisamchk -r database/table.MYI}.
@item
Start @code{mysqld} with @code{--log-update}. @xref{Update log}.
Start @code{mysqld} with @code{--log-binary}. @xref{Binary log}.
@item
When you have gotten a crashed table, stop the @code{mysqld} server.
@item
Restore the backup.
@item
Restart the @code{mysqld} server @strong{without} @code{--log-update}
Restart the @code{mysqld} server @strong{without} @code{--log-binary}
@item
Re-execute the commands with @code{mysql < update-log}. The update log
is saved in the @strong{MySQL} database directory with the name
@code{your-hostname.#}.
Re-execute the commands with @code{mysqlbinlog update-log-file | mysql}.
The update log is saved in the @strong{MySQL} database directory with
the name @code{hostname-bin.#}.
@item
If the tables are corrupted again, you have found a reproducible bug
in the @code{ISAM} code! FTP the tables and the update log to
in the @code{MyISAM} code! FTP the tables and the update log to
@uref{ftp://support.mysql.com/pub/mysql/secret} and we will fix this as soon as
possible!
@end itemize
......@@ -39,7 +39,7 @@
#include "my_readline.h"
#include <signal.h>
const char *VER="11.10";
const char *VER="11.11";
gptr sql_alloc(unsigned size); // Don't use mysqld alloc for these
void sql_element_free(void *ptr);
......@@ -1192,7 +1192,8 @@ You can turn off this feature to get a quicker startup with -A\n\n");
field_names=0;
/* hash all field names, both with the table prefix and without it */
if (!tables) { /* no tables */
if (!tables) /* no tables */
{
DBUG_VOID_RETURN;
}
mysql_data_seek(tables,0);
......@@ -1201,7 +1202,6 @@ You can turn off this feature to get a quicker startup with -A\n\n");
MYF(MY_WME));
if (!field_names)
DBUG_VOID_RETURN;
field_names[mysql_num_rows(tables)]='\0';
i=0;
while ((table_row=mysql_fetch_row(tables)))
{
......@@ -1229,10 +1229,14 @@ You can turn off this feature to get a quicker startup with -A\n\n");
}
}
else
{
tee_fprintf(stdout,
"Didn't find any fields in table '%s'\n",table_row[0]);
field_names[i]=0;
}
i++;
}
field_names[i]=0; // End pointer
DBUG_VOID_RETURN;
}
......@@ -2018,11 +2022,11 @@ com_use(String *buffer __attribute__((unused)), char *line)
if (mysql_select_db(&mysql,tmp))
return put_info(mysql_error(&mysql),INFO_ERROR,mysql_errno(&mysql));
}
my_free(current_db,MYF(MY_ALLOW_ZERO_PTR));
current_db=my_strdup(tmp,MYF(MY_WME));
#ifdef HAVE_READLINE
build_completion_hash(no_rehash,1);
#endif
my_free(current_db,MYF(MY_ALLOW_ZERO_PTR));
current_db=my_strdup(tmp,MYF(MY_WME));
}
}
else
......
......@@ -1241,7 +1241,7 @@ MYSQL_HAVE_FIONREAD
MYSQL_HAVE_TIOCSTAT
MYSQL_STRUCT_DIRENT_D_INO
MYSQL_TYPE_SIGHANDLER
if test $with_named_curses = "no"
if test "$with_named_curses" = "no"
then
MYSQL_CHECK_LIB_TERMCAP
else
......
......@@ -196,3 +196,7 @@ Changes done to this distrubtion (pthreads-1_60_beta6) by Monty (monty@tcx.se)
00.10.18 by Monty (monty@mysql.com)
- Added patch by Dave Huang <khym@bga.com> to fix problem with date/time
on NETBSD/Alpha.
01.01.11 by Monty (monty@mysql.com)
- Added patch by Allen Briggs <briggs@ninthwonder.com> for
Apple PowerMac 8500 w/ G3 upgrade running NetBSD/macppc
......@@ -295,7 +295,8 @@ EOF
echo ${UNAME_MACHINE}-unknown-freebsd`echo ${UNAME_RELEASE}|sed -e 's/[-(].*//'`
exit 0 ;;
*:NetBSD:*:*)
echo ${UNAME_MACHINE}-unknown-netbsd`echo ${UNAME_RELEASE}|sed -e 's/[-_].*/\./'`
UNAME_PROCESSOR=`uname -p 2>/dev/null` || UNAME_PROCESSOR=$UNAME_MACHINE
echo ${UNAME_PROCESSOR}-unknown-netbsd`echo ${UNAME_RELEASE}|sed -e 's/[-_].*/\./'`
exit 0 ;;
*:OpenBSD:*:*)
echo ${UNAME_MACHINE}-unknown-openbsd`echo ${UNAME_RELEASE}|sed -e 's/[-(].*//'`
......
......@@ -1298,6 +1298,12 @@ case $host in
# hpux-9.03.mk seems to be missing; what should this be?
except="fork"
;;
powerpc-*-netbsd1.*)
name=powerpc-netbsd
sysincludes=netbsd-1.1
except="fork lseek ftruncate pipe fstat"
available_syscalls="sigprocmask sigaction sigsuspend"
;;
sparc-*-sunos4.1.3* | sparc-*-sunos4.1.4*)
name=sparc-sunos-4.1.3
sysincludes=sunos-4.1.3
......
......@@ -175,6 +175,12 @@ changequote([,])dnl
# hpux-9.03.mk seems to be missing; what should this be?
except="fork"
;;
powerpc-*-netbsd1.*)
name=powerpc-netbsd
sysincludes=netbsd-1.1
except="fork lseek ftruncate pipe fstat"
available_syscalls="sigprocmask sigaction sigsuspend"
;;
sparc-*-sunos4.1.3* | sparc-*-sunos4.1.4*)
name=sparc-sunos-4.1.3
sysincludes=sunos-4.1.3
......
/* ==== machdep.c ============================================================
* Copyright (c) 1993, 1994 Chris Provenzano, proven@athena.mit.edu
*
* Description : Machine dependent functions for NetBSD/PowerPC (1.5+)
*
* 1.00 93/08/04 proven
* -Started coding this file.
*
* 2001/01/10 briggs
* -Modified to make it go with NetBSD/PowerPC
*/
#ifndef lint
static const char rcsid[] = "engine-alpha-osf1.c,v 1.4.4.1 1995/12/13 05:41:37 proven Exp";
#endif
#include <pthread.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/syscall.h>
#include <stdlib.h>
#include <fcntl.h>
#include <stdio.h>
/* ==========================================================================
* machdep_pthread_start()
*/
void machdep_pthread_start(void)
{
context_switch_done();
pthread_sched_resume ();
/* XXXMLG
* This is EXTREMELY bogus, but it seems that this function is called
* with the pthread kernel locked. If this happens, __errno() will
* return the wrong address until after the first context switch.
*
* Clearly there is a leak of pthread_kernel somewhere, but until
* it is found, we force a context switch here, just before calling
* the thread start routine. When we return from pthread_yield
* the kernel will be unlocked.
*/
pthread_yield();
/* Run current threads start routine with argument */
pthread_exit(pthread_run->machdep_data.start_routine
(pthread_run->machdep_data.start_argument));
/* should never reach here */
PANIC();
}
/* ==========================================================================
* __machdep_pthread_create()
*/
void __machdep_pthread_create(struct machdep_pthread *machdep_pthread,
void *(* start_routine)(void *), void *start_argument,
long stack_size, long nsec, long flags)
{
machdep_pthread->start_routine = start_routine;
machdep_pthread->start_argument = start_argument;
machdep_pthread->machdep_timer.it_value.tv_sec = 0;
machdep_pthread->machdep_timer.it_interval.tv_sec = 0;
machdep_pthread->machdep_timer.it_interval.tv_usec = 0;
machdep_pthread->machdep_timer.it_value.tv_usec = nsec / 1000;
/* Set up new stack frame so that it looks like it returned from a
longjmp() to the beginning of machdep_pthread_start(). */
/* state is sigmask, then r8-r31 where r11 is the LR
* So, istate[3] is r10, which is the SP
* So, istate[4] is r11, which is the LR
* So, istate[5] is r12, which is the CR
*/
machdep_pthread->machdep_istate[4] = (long)machdep_pthread_start;
machdep_pthread->machdep_istate[5] = 0;
/* PowerPC stack starts high and builds down, and needs to be 16-byte
aligned. */
machdep_pthread->machdep_istate[3] =
((long) machdep_pthread->machdep_stack + stack_size) & ~0xf;
}
/* ==========================================================================
* machdep_save_state()
*/
int machdep_save_state(void)
{
return( _setjmp(pthread_run->machdep_data.machdep_istate) );
}
void machdep_restore_state(void)
{
_longjmp(pthread_run->machdep_data.machdep_istate, 1);
}
void machdep_save_float_state (struct pthread *pthread)
{
__machdep_save_fp_state(pthread->machdep_data.machdep_fstate);
}
void machdep_restore_float_state (void)
{
__machdep_restore_fp_state(pthread_run->machdep_data.machdep_fstate);
}
/* ==========================================================================
* machdep_set_thread_timer()
*/
void machdep_set_thread_timer(struct machdep_pthread *machdep_pthread)
{
if (setitimer(ITIMER_VIRTUAL, &(machdep_pthread->machdep_timer), NULL)) {
PANIC();
}
}
/* ==========================================================================
* machdep_unset_thread_timer()
*/
void machdep_unset_thread_timer(struct machdep_pthread *machdep_pthread)
{
struct itimerval zeroval = { { 0, 0 }, { 0, 0} };
if (setitimer(ITIMER_VIRTUAL, &zeroval, NULL)) {
PANIC();
}
}
/* ==========================================================================
* machdep_pthread_cleanup()
*/
void *machdep_pthread_cleanup(struct machdep_pthread *machdep_pthread)
{
return(machdep_pthread->machdep_stack);
}
void *machdep_pthread_cleanup(struct machdep_pthread *machdep_pthread);
void machdep_pthread_start(void);
/* ==========================================================================
* __machdep_stack_free()
*/
void
__machdep_stack_free(void * stack)
{
free(stack);
}
/* ==========================================================================
* __machdep_stack_alloc()
*/
void *
__machdep_stack_alloc(size_t size)
{
return(malloc(size));
}
/* ==========================================================================
* machdep_sys_creat()
*/
int
machdep_sys_creat(char * path, int mode)
{
return(machdep_sys_open(path, O_WRONLY | O_CREAT | O_TRUNC, mode));
}
/* ==========================================================================
* machdep_sys_wait3()
*/
int
machdep_sys_wait3(int * b, int c, int *d)
{
return(machdep_sys_wait4(0, b, c, d));
}
/* ==========================================================================
* machdep_sys_waitpid()
*/
int
machdep_sys_waitpid(int a, int * b, int c)
{
return(machdep_sys_wait4(a, b, c, NULL));
}
/* ==========================================================================
* machdep_sys_getdtablesize()
*/
int
machdep_sys_getdtablesize(void)
{
return(sysconf(_SC_OPEN_MAX));
}
/* ==========================================================================
* machdep_sys_lseek()
*/
off_t
machdep_sys_lseek(int fd, off_t offset, int whence)
{
return(__syscall((quad_t)SYS_lseek, fd, 0, offset, whence));
}
int
machdep_sys_ftruncate( int fd, off_t length)
{
quad_t q;
int rv;
q = __syscall((quad_t)SYS_ftruncate, fd,0, length);
if( /* LINTED constant */ sizeof( quad_t ) == sizeof( register_t ) ||
/* LINTED constant */ BYTE_ORDER == LITTLE_ENDIAN )
rv = (int)q;
else
rv = (int)((u_quad_t)q >> 32);
return rv;
}
/* ==========================================================================
* machdep_sys_getdirentries()
*/
int
machdep_sys_getdirentries(int fd, char * buf, int len, int * seek)
{
return(machdep_sys_getdents(fd, buf, len));
}
/* ==== machdep.h ============================================================
* Copyright (c) 1994 Chris Provenzano (proven@athena.mit.edu) and
* Ken Raeburn (raeburn@mit.edu).
*
* engine-alpha-osf1.h,v 1.4.4.1 1995/12/13 05:41:42 proven Exp
*
*/
#include <unistd.h>
#include <setjmp.h>
#include <sys/time.h>
#include <sys/cdefs.h>
#include <sys/signal.h> /* for _NSIG */
/*
* The first machine dependent functions are the SEMAPHORES
* needing the test and set instruction.
*/
#define SEMAPHORE_CLEAR 0
#define SEMAPHORE_SET 0xffff
#define SEMAPHORE_TEST_AND_SET(lock) \
({ \
volatile long t1, temp = SEMAPHORE_SET; \
__asm__ volatile( \
"1: lwarx %0,0,%1; \
cmpwi %0, 0; \
bne 2f; \
stwcx. %2,0,%1; \
bne- 1b; \
2: " \
:"=r" (t1) \
:"m" (lock), "r" (temp)); \
t1; \
})
#define SEMAPHORE_RESET(lock) *lock = SEMAPHORE_CLEAR
/*
* New types
*/
typedef int semaphore;
/*
* sigset_t macros
*/
#define SIG_ANY(sig) (sig)
#define SIGMAX (_NSIG-1)
/*
* New Strutures
*/
struct machdep_pthread {
void *(*start_routine)(void *);
void *start_argument;
void *machdep_stack;
struct itimerval machdep_timer;
jmp_buf machdep_istate;
unsigned long machdep_fstate[66];
/* 64-bit fp regs 0-31 + fpscr */
/* We pretend the fpscr is 64 bits */
};
/*
* Static machdep_pthread initialization values.
* For initial thread only.
*/
#define MACHDEP_PTHREAD_INIT \
{ NULL, NULL, NULL, { { 0, 0 }, { 0, 100000 } }, { 0 }, { 0 } }
/*
* Minimum stack size
*/
#define PTHREAD_STACK_MIN 2048
/*
* Some fd flag defines that are necessary to distinguish between posix
* behavior and bsd4.3 behavior.
*/
#define __FD_NONBLOCK O_NONBLOCK
/*
* New functions
*/
__BEGIN_DECLS
#if defined(PTHREAD_KERNEL)
#define __machdep_stack_get(x) (x)->machdep_stack
#define __machdep_stack_set(x, y) (x)->machdep_stack = y
#define __machdep_stack_repl(x, y) \
{ \
if ((stack = __machdep_stack_get(x))) { \
__machdep_stack_free(stack); \
} \
__machdep_stack_set(x, y); \
}
int machdep_save_state(void);
void __machdep_save_fp_state(unsigned long *);
void __machdep_restore_fp_state(unsigned long *);
void *__machdep_stack_alloc(size_t);
void __machdep_stack_free(void *);
#endif
__END_DECLS
#include <machine/asm.h>
#define COMPAT_43
#include <sys/syscall.h>
#ifndef __CONCAT
#include <sys/cdefs.h>
#endif
#define CONCAT __CONCAT
#undef SYSCALL
/* Kernel syscall interface:
Input:
0 - system call number
3-8 - arguments, as in C
Output:
so - (summary overflow) clear iff successful
This macro is similar to SYSCALL in asm.h, but not completely.
There's room for optimization, if we assume this will continue to
be assembled as one file.
This macro expansions does not include the return instruction.
If there's no other work to be done, use something like:
SYSCALL(foo) ; ret
If there is other work to do (in fork, maybe?), do it after the
SYSCALL invocation. */
ENTRY(machdep_cerror)
mflr 0 # Save LR in 0
stwu 1,-16(1) # allocate new stack frame
stw 0,20(1) # Stash 0 in stack
stw 31,8(1) # Stash 31 in stack (since it's callee-saved
mr 31,3 # and we stash return there)
bl PIC_PLT(_C_LABEL(__errno))
stw 31,0(3) # *errno() = err
lwz 0,20(1) # Restore LR from stack to 0
neg 3,31 # return -errno to 3
lwz 31,8(1) # Restore 31 from stack
mtlr 0
la 1,16(1) # Restore stack frame
li 4,-1 # Put -1 in r4 for those syscalls that return
blr # two values
/* The fork system call is special... */
ENTRY(machdep_sys_fork)
li 0, SYS_fork
sc
bso PIC_PLT(_C_LABEL(machdep_cerror))
addi 4,4,-1
blr
/* The pipe system call is special... */
ENTRY(machdep_sys_pipe)
mr 5,3
li 0,SYS_pipe
sc
bso PIC_PLT(_C_LABEL(machdep_cerror))
stw 3,0(5) # Success, store fds
stw 4,4(5)
li 3,0
blr # And return 0
#ifndef SYS___sigsuspend14
/* The sigsuspend system call is special... */
ENTRY(machdep_sys_sigsuspend)
lwz 3,0(3)
li 0,SYS_compat_13_sigsuspend13
sc
b PIC_PLT(_C_LABEL(machdep_cerror))
#endif /* SYS_sigsuspend14 */
#ifndef SYS___sigprocmask14
/* The sigprocmask system call is special... */
ENTRY(machdep_sys_sigprocmask)
or. 4,4,4 # Set == NULL ?
li 6,1 # how = SIG_BLOCK
beq Ldoit
lwz 4,0(4) # if not, replace it in r4 with #set
mr 6,3
Ldoit: mr 3,6 # ... using sigprocmask(SIG_BLOCK)
li 0,SYS_compat_13_sigprocmask13
sc
bso PIC_PLT(_C_LABEL(machdep_cerror))
or. 5,5,5 # Check to see if oset requested
beq Ldone # if oset != NULL
stw 3,0(5) # *oset = oldmask
Ldone:
li 3,0 # return 0
blr
#endif /* SYS_sigprocmask14 */
/* More stuff ... */
/* For fstat() we actually syscall fstat13. */
ENTRY(machdep_sys_fstat)
li 0, SYS___fstat13
sc
bnslr
b PIC_PLT(_C_LABEL(machdep_cerror))
/* Do we need to save the entire floating point state? I think so... */
ENTRY(__machdep_save_fp_state)
stwu 1,-8(1)
stw 3,4(1)
stfd 0,0(3)
stfdu 1,8(3)
stfdu 2,8(3)
stfdu 3,8(3)
stfdu 4,8(3)
stfdu 5,8(3)
stfdu 6,8(3)
stfdu 7,8(3)
stfdu 8,8(3)
stfdu 9,8(3)
stfdu 10,8(3)
stfdu 11,8(3)
stfdu 12,8(3)
stfdu 13,8(3)
stfdu 14,8(3)
stfdu 15,8(3)
stfdu 16,8(3)
stfdu 17,8(3)
stfdu 18,8(3)
stfdu 19,8(3)
stfdu 20,8(3)
stfdu 21,8(3)
stfdu 22,8(3)
stfdu 23,8(3)
stfdu 24,8(3)
stfdu 25,8(3)
stfdu 26,8(3)
stfdu 27,8(3)
stfdu 28,8(3)
stfdu 29,8(3)
stfdu 30,8(3)
stfdu 31,8(3)
mffs 0
stfdu 0,8(3)
lwz 3,4(1)
lwz 1,0(1)
blr
ENTRY(__machdep_restore_fp_state)
stwu 1,-12(1)
stw 3,4(1)
stw 4,8(1)
mr 4,3
lfdu 1,8(3)
lfdu 2,8(3)
lfdu 3,8(3)
lfdu 4,8(3)
lfdu 5,8(3)
lfdu 6,8(3)
lfdu 7,8(3)
lfdu 8,8(3)
lfdu 9,8(3)
lfdu 10,8(3)
lfdu 11,8(3)
lfdu 12,8(3)
lfdu 13,8(3)
lfdu 14,8(3)
lfdu 15,8(3)
lfdu 16,8(3)
lfdu 17,8(3)
lfdu 18,8(3)
lfdu 19,8(3)
lfdu 20,8(3)
lfdu 21,8(3)
lfdu 22,8(3)
lfdu 23,8(3)
lfdu 24,8(3)
lfdu 25,8(3)
lfdu 26,8(3)
lfdu 27,8(3)
lfdu 28,8(3)
lfdu 29,8(3)
lfdu 30,8(3)
lfdu 31,8(3)
lfdu 0,8(3)
mtfsf 127,0
lfd 0,0(4)
lwz 3,4(1)
lwz 4,8(1)
lwz 1,0(1)
blr
#include <machine/asm.h>
#define COMPAT_43
#include <sys/syscall.h>
#ifdef SYS___sigsuspend14
#define SYS_sigsuspend SYS___sigsuspend14
#endif
#ifdef SYS___sigaction14
#define SYS_sigaction SYS___sigaction14
#endif
#ifdef SYS___sigprocmask14
#define SYS_sigprocmask SYS___sigprocmask14
#endif
#undef SYSCALL
/* Kernel syscall interface:
Input:
0 - system call number
3-8 - arguments, as in C
Output:
so - (summary overflow) clear iff successful
This macro is similar to SYSCALL in asm.h, but not completely.
There's room for optimization, if we assume this will continue to
be assembled as one file.
This macro expansions does not include the return instruction.
If there's no other work to be done, use something like:
SYSCALL(foo) ; ret
If there is other work to do (in fork, maybe?), do it after the
SYSCALL invocation. */
#define SYSCALL(x) \
ENTRY(machdep_sys_ ## x) \
li 0, SYS_ ## x ; \
sc ; \
bnslr ; \
b PIC_PLT(_C_LABEL(machdep_cerror))
#define XSYSCALL(x) SYSCALL(x) ; blr
XSYSCALL(SYSCALL_NAME)
......@@ -35,6 +35,14 @@ grp a c id a c d
1 1 a 1 1 a 1
2 2 b NULL NULL NULL NULL
2 3 c NULL NULL NULL NULL
3 4 E 3 4 A 4
3 5 C 3 5 B 5
3 6 D 3 6 C 6
NULL NULL NULL NULL NULL NULL
grp a c id a c d
1 1 a 1 1 a 1
2 2 b NULL NULL NULL NULL
2 3 c NULL NULL NULL NULL
3 4 E NULL NULL NULL NULL
3 5 C NULL NULL NULL NULL
3 6 D NULL NULL NULL NULL
......
......@@ -18,6 +18,7 @@ select t1.*,t2.* from t1 left join t2 on (t1.a=t2.a) order by t1.grp,t1.a,t2.c;
select t1.*,t2.* from { oj t2 left outer join t1 on (t1.a=t2.a) };
select t1.*,t2.* from t1 as t0,{ oj t2 left outer join t1 on (t1.a=t2.a) } WHERE t0.a=2;
select t1.*,t2.* from t1 left join t2 using (a);
select t1.*,t2.* from t1 left join t2 using (a) where t1.a=t2.a;
select t1.*,t2.* from t1 left join t2 using (a,c);
select t1.*,t2.* from t1 left join t2 using (c);
select t1.*,t2.* from t1 natural left outer join t2;
......
#!@PERL@
#!@PERL@ -w
use strict;
use Getopt::Long;
......@@ -36,8 +36,9 @@ WARNING: THIS IS VERY MUCH A FIRST-CUT ALPHA. Comments/patches welcome.
# Documentation continued at end of file
my $VERSION = "1.9";
my $opt_tmpdir= $main::ENV{TMPDIR};
my $VERSION = "1.10";
my $opt_tmpdir = $ENV{TMPDIR} || "/tmp";
my $OPTIONS = <<"_OPTIONS";
......@@ -74,7 +75,7 @@ sub usage {
}
my %opt = (
user => getpwuid($>),
user => scalar getpwuid($>),
noindices => 0,
allowold => 0, # for safety
keepold => 0,
......@@ -139,7 +140,7 @@ else {
my %mysqld_vars;
my $start_time = time;
my $opt_tmpdir= $opt{tmpdir} ? $opt{tmpdir} : $main::ENV{TMPDIR};
$opt_tmpdir= $opt{tmpdir} if $opt{tmpdir};
$0 = $1 if $0 =~ m:/([^/]+)$:;
$opt{quiet} = 0 if $opt{debug};
$opt{allowold} = 1 if $opt{keepold};
......@@ -235,16 +236,17 @@ foreach my $rdb ( @db_desc ) {
or die "Cannot open dir '$db_dir': $!";
my %db_files;
map { ( /(.+)\.\w+$/ ? { $db_files{$_} = $1 } : () ) } readdir(DBDIR);
map { ( /(.+)\.\w+$/ ? ( $db_files{$_} = $1 ) : () ) } readdir(DBDIR);
unless( keys %db_files ) {
warn "'$db' is an empty database\n";
}
closedir( DBDIR );
## filter (out) files specified in t_regex
my @db_files = sort ( $negated
my @db_files = ( $negated
? grep { $db_files{$_} !~ $t_regex } keys %db_files
: grep { $db_files{$_} =~ $t_regex } keys %db_files );
@db_files = sort @db_files;
my @index_files=();
## remove indices unless we're told to keep them
......@@ -776,3 +778,5 @@ Scott Wiersdorf - added table regex and scp support
Monty - working --noindex (copy only first 2048 bytes of index file)
Fixes for --method=scp
Ask Bjoern Hansen - Cleanup code to fix a few bugs and enable -w again.
......@@ -78,7 +78,7 @@ const char *ha_berkeley_ext=".db";
bool berkeley_skip=0,berkeley_shared_data=0;
u_int32_t berkeley_init_flags= DB_PRIVATE | DB_RECOVER, berkeley_env_flags=0,
berkeley_lock_type=DB_LOCK_DEFAULT;
ulong berkeley_cache_size;
ulong berkeley_cache_size, berkeley_log_buffer_size, berkeley_log_file_size=0;
char *berkeley_home, *berkeley_tmpdir, *berkeley_logdir;
long berkeley_lock_scan_time=0;
ulong berkeley_trans_retry=1;
......@@ -99,7 +99,8 @@ static void berkeley_print_error(const char *db_errpfx, char *buffer);
static byte* bdb_get_key(BDB_SHARE *share,uint *length,
my_bool not_used __attribute__((unused)));
static BDB_SHARE *get_share(const char *table_name, TABLE *table);
static int free_share(BDB_SHARE *share, TABLE *table, uint hidden_primary_key);
static int free_share(BDB_SHARE *share, TABLE *table, uint hidden_primary_key,
bool mutex_is_locked);
static int write_status(DB *status_block, char *buff, uint length);
static void update_status(BDB_SHARE *share, TABLE *table);
static void berkeley_noticecall(DB_ENV *db_env, db_notices notice);
......@@ -118,6 +119,23 @@ bool berkeley_init(void)
berkeley_tmpdir=mysql_tmpdir;
if (!berkeley_home)
berkeley_home=mysql_real_data_home;
/*
If we don't call set_lg_bsize() we will get into trouble when
trying to use many open BDB tables.
If the log buffer is not set, assume that we will need 512 bytes per
open table. This is a number that we have reached by testing.
*/
if (!berkeley_log_buffer_size)
{
berkeley_log_buffer_size= max(table_cache_size*512,32*1024);
}
/*
Berkeley DB require that
berkeley_log_file_size >= berkeley_log_buffer_size*4
*/
berkeley_log_file_size= berkeley_log_buffer_size*4;
berkeley_log_file_size= MY_ALIGN(berkeley_log_file_size,1024*1024L);
berkeley_log_file_size= max(berkeley_log_file_size, 10*1024*1024L);
if (db_env_create(&db_env,0))
DBUG_RETURN(1);
......@@ -136,6 +154,8 @@ bool berkeley_init(void)
1);
db_env->set_cachesize(db_env, 0, berkeley_cache_size, 0);
db_env->set_lg_max(db_env, berkeley_log_file_size);
db_env->set_lg_bsize(db_env, berkeley_log_buffer_size);
db_env->set_lk_detect(db_env, berkeley_lock_type);
if (berkeley_max_lock)
db_env->set_lk_max(db_env, berkeley_max_lock);
......@@ -465,7 +485,7 @@ int ha_berkeley::open(const char *name, int mode, uint test_if_locked)
{
if ((error=db_create(&file, db_env, 0)))
{
free_share(share,table, hidden_primary_key);
free_share(share,table, hidden_primary_key,1);
my_free(rec_buff,MYF(0));
my_free(alloc_ptr,MYF(0));
my_errno=error;
......@@ -482,7 +502,7 @@ int ha_berkeley::open(const char *name, int mode, uint test_if_locked)
2 | 4),
"main", DB_BTREE, open_mode,0))))
{
free_share(share,table, hidden_primary_key);
free_share(share,table, hidden_primary_key,1);
my_free(rec_buff,MYF(0));
my_free(alloc_ptr,MYF(0));
my_errno=error;
......@@ -555,7 +575,7 @@ int ha_berkeley::close(void)
my_free(rec_buff,MYF(MY_ALLOW_ZERO_PTR));
my_free(alloc_ptr,MYF(MY_ALLOW_ZERO_PTR));
DBUG_RETURN(free_share(share,table, hidden_primary_key));
DBUG_RETURN(free_share(share,table, hidden_primary_key,0));
}
......@@ -2077,11 +2097,14 @@ static BDB_SHARE *get_share(const char *table_name, TABLE *table)
return share;
}
static int free_share(BDB_SHARE *share, TABLE *table, uint hidden_primary_key)
static int free_share(BDB_SHARE *share, TABLE *table, uint hidden_primary_key,
bool mutex_is_locked)
{
int error, result = 0;
uint keys=table->keys + test(hidden_primary_key);
pthread_mutex_lock(&bdb_mutex);
if (mutex_is_locked)
pthread_mutex_unlock(&share->mutex);
if (!--share->use_count)
{
DB **key_file = share->key_file;
......
......@@ -168,7 +168,7 @@ class ha_berkeley: public handler
extern bool berkeley_skip, berkeley_shared_data;
extern u_int32_t berkeley_init_flags,berkeley_env_flags, berkeley_lock_type,
berkeley_lock_types[];
extern ulong berkeley_cache_size, berkeley_max_lock;
extern ulong berkeley_cache_size, berkeley_max_lock, berkeley_log_buffer_size;
extern char *berkeley_home, *berkeley_tmpdir, *berkeley_logdir;
extern long berkeley_lock_scan_time;
extern TYPELIB berkeley_lock_typelib;
......
......@@ -22,102 +22,27 @@ Innobase */
/* TODO list for the Innobase handler:
- How to check for deadlocks if Innobase tables are used alongside
other MySQL table types? Should MySQL communicate the locking
information also to Innobase of any object it locks, or should
we use a timeout to detect a deadlock? Solution: there is no problem,
because MySQL requires that all table-level locks are reserved
at transaction startup (conservative locking). Deadlocks cannot
occur because of these locks.
- Innobase cmp function should call MySQL cmp for most datatypes?
Except currently for binary strings and 32-bit integers?
Solution: MySQL has conversion functions which currently convert
any datatype to a binary string which can be compared as binary
strings, except for character strings where we must identify
lower case with upper case.
other MySQL table types? Solution: we will use a timeout.
- MySQL parser should know SELECT FOR UPDATE and SELECT WITH SHARED LOCKS
for Innobase interface. We probably will make the non-locking
consistent read the default in Innobase like in Oracle.
- Does function next_same require matching of the whole last field,
or it is enough that the common prefix of the last field matches?
Answer: it is enough that the common prefix matches.
- Is the 'ref' field in handle pre-allocated to be big enough? Primary key
values can be very long! Answer: we can reallocate it to be long enough.
- DELETE FROM TABLE must not drop the table like it does now, because
consistent read will not work then! Answer: there is probably a flag
in MySQL which we can use to prevent dropping of a table in this case.
-------Oct 24, 2000
- Update trx pointers in 'prebuilt' when the transaction object of
the handle changes. Answer: in 'external_lock' we always set the pointers
to point to the trx of the current user. Note that if a user has
disconnected, then another thd at exactly the same machine address may
be created: just comparing the thd pointers does not really tell if it
actually is the same user using the handle!
- ANSI SQL specifies that if an SQL statement fails because of
an error (like duplicate key, division by zero), the whole statement
must be rolled back. Currently an error like this only rolls
back a single insert of a row, or a single row update.
-------Oct 25, 2000
- There are some autonomous threads within Innobase, like purge (= gc),
ibuf merge, and recovery threads, which may have to open tables.
Then they need type information for the table columns from MySQL.
Could they call 'openfrm' in MySQL? Then they should be properly
initialized pthreads, I presume.
-------Oct 30, 2000
- Dropping of table in Innobase fails if there is a lock set on it:
Innobase then gives an error number to MySQL but MySQL seems to drop
the table from its own data dictionary anyway, causing incoherence
between the two databases.
-------Oct 31, 2000
- In sql_table.cpp in quick_rm_table, the path has to be 'unpacked'
also after the latter sprintf to change / to \ in the path name.
between the two databases. Solution: sleep until locks have been
released.
- Innobase currently includes the path to a table name: the path should
actually be dropped off, because we may move a whole database to a new
directory.
-------Nov 1, 2000
- Ask from Monty what error codes 'record not found' and 'end of table'
exactly mean and when read and fetch operations should return them.
-------Nov 2, 2000
- Find out why in 'test-ATIS' in 'bench' directory, the client does
not seem to receive rows sent by the server: maybe Innobase does not
handle 'decimal' type correctly. Answer: Innobase did not return
'record not found' and 'end of table' with right meanings.
-------Nov 3, 2000
- 'pack' adds field length in front of string type fields: fields of
different length are not correctly alphabetically ordered.
- 'pack' removes leading (or was it, trailing) spaces from string type
fields: maybe it would be better to store them as they are, if the
field is not declared as varchar.
- MySQL 'read_last_with_key' does not allow Innobase to return
HA_ERR_KEY_NOT_FOUND, even when we try to read from an empty
table.
-------Nov 4, 2000
- MySQL should know when row id is added as uniquefier to a table for
update and delete to work.
- Innobase does not really support MySQL varchar type yet.
-------Nov 16, 2000
- We use memcpy to store float and double types to Innobase: this
makes database files not portable between big-endian and little-endian
machines.
-------Nov 17, 2000
- We added call of innobase_close_connection to THD::~THD in sql_class.cpp.
-------Nov 21, 2000
- In mysql_delete, in sql_delete.cpp, we must be able to prevent
MySQL from using generate_table to do a delete: consistent read does
not allow this. Currently, MySQL uses generate_table in DELETE FROM ...
if autocommit is on.
-------Nov 24, 2000
- Make the SELECT in an update a locking read.
- Add a deadlock error message to MySQL.
- Add 'cannot drop locked table' error message to MySQL.
-------Nov 26, 2000
- Find out why MySQL sometimes prints error message about read locks and
write locks associated with a handle.
- Find out why MySQL at shutdown prints error message 'Error on delete of
......pid (Errcode : 2).
-------Nov 30, 2000
- MySQL calls innobase_end (shutdown) before it knows that all handles
have been closed. It declares MySQL shutdown complete before Innobase
shutdown is complete.
*/
#ifdef __GNUC__
......@@ -131,43 +56,50 @@ Innobase */
#include <hash.h>
#include <myisampack.h>
#include "ha_innobase.h"
/* We use the following define in univ.i to remove a conflicting definition
of type 'byte' in univ.i, different from MySQL definition */
#define INSIDE_HA_INNOBASE_CC
/* NOTE! When we include univ.i below, bool will be defined in the Innobase
way as an unsigned long int! In MySQL source code bool may be char. */
/* Include necessary Innobase headers */
extern "C" {
#include <univmysql.i>
#include <srv0start.h>
#include <srv0srv.h>
#include <trx0roll.h>
#include <trx0trx.h>
#include <row0ins.h>
#include <row0mysql.h>
#include <row0sel.h>
#include <row0upd.h>
#include <log0log.h>
#include <dict0crea.h>
#include <btr0cur.h>
#include <btr0btr.h>
#include "../innobase/include/univ.i"
#include "../innobase/include/srv0start.h"
#include "../innobase/include/srv0srv.h"
#include "../innobase/include/trx0roll.h"
#include "../innobase/include/trx0trx.h"
#include "../innobase/include/row0ins.h"
#include "../innobase/include/row0mysql.h"
#include "../innobase/include/row0sel.h"
#include "../innobase/include/row0upd.h"
#include "../innobase/include/log0log.h"
#include "../innobase/include/dict0crea.h"
#include "../innobase/include/btr0cur.h"
#include "../innobase/include/btr0btr.h"
}
#include "ha_innobase.h"
#define HA_INNOBASE_ROWS_IN_TABLE 10000 /* to get optimization right */
#define HA_INNOBASE_RANGE_COUNT 100
const char* ha_innobase_ext = ".ib";
bool innobase_skip = 0;
mysql_bool innobase_skip = 0;
uint innobase_init_flags = 0;
ulong innobase_cache_size = 0;
long innobase_mirrored_log_groups, innobase_mirrored_log_groups,
long innobase_mirrored_log_groups, innobase_log_files_in_group,
innobase_log_file_size, innobase_log_buffer_size,
innobase_buffer_pool_size, innobase_additional_mem_pool_size,
innobase_file_io_threads;
char *innobase_data_home_dir, *innobase_data_file_path;
char *innobase_log_group_home_dir, *innobase_log_arch_dir;
bool innobase_flush_log_at_trx_commit,innobase_log_archive;
mysql_bool innobase_flush_log_at_trx_commit, innobase_log_archive,
innobase_use_native_aio;
/* innobase_data_file_path=ibdata:15,idata2:1,... */
......@@ -182,9 +114,9 @@ ulong innobase_select_counter = 0;
char* innobase_home = NULL;
pthread_mutex_t innb_mutex;
pthread_mutex_t innobase_mutex;
static HASH innb_open_tables;
static HASH innobase_open_tables;
static byte* innobase_get_key(INNOBASE_SHARE *share,uint *length,
my_bool not_used __attribute__((unused)));
......@@ -261,12 +193,20 @@ check_trx_exists(
assert(thd != NULL);
trx = (trx_t*) thd->transaction.innobase_trx_handle;
trx = (trx_t*) thd->transaction.all.innobase_tid;
if (trx == NULL) {
trx = trx_allocate_for_mysql();
thd->transaction.innobase_trx_handle = trx;
thd->transaction.all.innobase_tid = trx;
/* The execution of a single SQL statement is denoted by
a 'transaction' handle which is a NULL pointer: Innobase
remembers internally where the latest SQL statement
started, and if error handling requires rolling back the
latest statement, Innobase does a rollback to a savepoint. */
thd->transaction.stmt.innobase_tid = NULL;
}
return(trx);
......@@ -298,25 +238,226 @@ ha_innobase::update_thd(
return(0);
}
/*************************************************************************
Reads the data files and their sizes from a character string given in
the .cnf file. */
static
mysql_bool
innobase_parse_data_file_paths_and_sizes(void)
/*==========================================*/
/* out: ((mysql_bool)TRUE) if ok,
((mysql_bool)FALSE) if parsing
error */
{
char* str;
char* endp;
char* path;
ulint size;
ulint i = 0;
str = innobase_data_file_path;
/* First calculate the number of data files and check syntax:
path:size[M];path:size[M]... */
while (*str != '\0') {
path = str;
while (*str != ':' && *str != '\0') {
str++;
}
if (*str == '\0') {
return(((mysql_bool)FALSE));
}
str++;
size = strtoul(str, &endp, 10);
str = endp;
if (*str != 'M') {
size = size / (1024 * 1024);
} else {
str++;
}
if (size == 0) {
return(((mysql_bool)FALSE));
}
i++;
if (*str == ';') {
str++;
} else if (*str != '\0') {
return(((mysql_bool)FALSE));
}
}
srv_data_file_names = (char**) ut_malloc(i * sizeof(void*));
srv_data_file_sizes = (ulint*)ut_malloc(i * sizeof(ulint));
srv_n_data_files = i;
/* Then store the actual values to our arrays */
str = innobase_data_file_path;
i = 0;
while (*str != '\0') {
path = str;
while (*str != ':' && *str != '\0') {
str++;
}
if (*str == ':') {
/* Make path a null-terminated string */
*str = '\0';
str++;
}
size = strtoul(str, &endp, 10);
str = endp;
if (*str != 'M') {
size = size / (1024 * 1024);
} else {
str++;
}
srv_data_file_names[i] = path;
srv_data_file_sizes[i] = size;
i++;
if (*str == ';') {
str++;
}
}
return(((mysql_bool)TRUE));
}
/*************************************************************************
Reads log group home directories from a character string given in
the .cnf file. */
static
mysql_bool
innobase_parse_log_group_home_dirs(void)
/*====================================*/
/* out: ((mysql_bool)TRUE) if ok,
((mysql_bool)FALSE) if parsing
error */
{
char* str;
char* path;
ulint i = 0;
str = innobase_log_group_home_dir;
/* First calculate the number of directories and check syntax:
path;path;... */
while (*str != '\0') {
path = str;
while (*str != ';' && *str != '\0') {
str++;
}
i++;
if (*str == ';') {
str++;
} else if (*str != '\0') {
return(((mysql_bool)FALSE));
}
}
if (i != (ulint) innobase_mirrored_log_groups) {
return(((mysql_bool)FALSE));
}
srv_log_group_home_dirs = (char**) ut_malloc(i * sizeof(void*));
/* Then store the actual values to our array */
str = innobase_log_group_home_dir;
i = 0;
while (*str != '\0') {
path = str;
while (*str != ';' && *str != '\0') {
str++;
}
if (*str == ';') {
*str = '\0';
str++;
}
srv_log_group_home_dirs[i] = path;
i++;
}
return(((mysql_bool)TRUE));
}
/*************************************************************************
Opens an Innobase database. */
bool
mysql_bool
innobase_init(void)
/*===============*/
/* out: TRUE if error */
/* out: ((mysql_bool)TRUE) if error */
{
int err;
mysql_bool ret;
DBUG_ENTER("innobase_init");
if (!innobase_home) {
innobase_home = mysql_real_data_home;
/* Set Innobase initialization parameters according to the values
read from MySQL .cnf file */
printf("Innobase home is %s\n", innobase_home);
}
srv_data_home = innobase_data_home_dir;
srv_logs_home = "";
srv_arch_dir = innobase_log_arch_dir;
ret = innobase_parse_data_file_paths_and_sizes();
if (ret == ((mysql_bool)FALSE)) {
return(((mysql_bool)TRUE));
}
ret = innobase_parse_log_group_home_dirs();
if (ret == ((mysql_bool)FALSE)) {
return(((mysql_bool)TRUE));
}
srv_n_log_groups = (ulint) innobase_mirrored_log_groups;
srv_n_log_files = (ulint) innobase_log_files_in_group;
srv_log_file_size = (ulint) innobase_log_file_size;
srv_log_archive_on = (ulint) innobase_log_archive;
srv_log_buffer_size = (ulint) innobase_log_buffer_size;
srv_flush_log_at_trx_commit = (ulint) innobase_flush_log_at_trx_commit;
err = innobase_start_or_create_for_mysql(innobase_home);
srv_use_native_aio = (ulint) innobase_use_native_aio;
srv_pool_size = (ulint) innobase_buffer_pool_size;
srv_mem_pool_size = (ulint) innobase_additional_mem_pool_size;
srv_n_file_io_threads = (ulint) innobase_file_io_threads;
err = innobase_start_or_create_for_mysql();
if (err != DB_SUCCESS) {
......@@ -331,10 +472,10 @@ innobase_init(void)
/***********************************************************************
Closes an Innobase database. */
bool
mysql_bool
innobase_end(void)
/*==============*/
/* out: TRUE if error */
/* out: ((mysql_bool)TRUE) if error */
{
int err;
......@@ -354,12 +495,12 @@ innobase_end(void)
Flushes Innobase logs to disk and makes a checkpoint. Really, a commit
flushes logs, and the name of this function should be innobase_checkpoint. */
bool
mysql_bool
innobase_flush_logs(void)
/*=====================*/
/* out: TRUE if error */
/* out: ((mysql_bool)TRUE) if error */
{
bool result = 0;
mysql_bool result = 0;
DBUG_ENTER("innobase_flush_logs");
......@@ -375,8 +516,11 @@ int
innobase_commit(
/*============*/
/* out: 0 or error number */
THD* thd) /* in: MySQL thread handle of the user for whom
THD* thd, /* in: MySQL thread handle of the user for whom
the transaction should be committed */
void* trx_handle)/* in: Innobase trx handle or NULL: NULL means
that the current SQL statement ended, and we should
mark the start of a new statement with a savepoint */
{
int error = 0;
......@@ -385,7 +529,13 @@ innobase_commit(
check_trx_exists(thd);
trx_commit_for_mysql((trx_t*)(thd->transaction.innobase_trx_handle));
if (trx_handle) {
trx_commit_for_mysql(
(trx_t*) (thd->transaction.all.innobase_tid));
} else {
trx_mark_sql_stat_end(
(trx_t*) (thd->transaction.all.innobase_tid));
}
#ifndef DBUG_OFF
if (error) {
......@@ -407,8 +557,11 @@ int
innobase_rollback(
/*==============*/
/* out: 0 or error number */
THD* thd) /* in: handle to the MySQL thread of the user
THD* thd, /* in: handle to the MySQL thread of the user
whose transaction should be rolled back */
void* trx_handle)/* in: Innobase trx handle or NULL: NULL means
that the current SQL statement should be rolled
back */
{
int error = 0;
......@@ -417,8 +570,13 @@ innobase_rollback(
check_trx_exists(thd);
error = trx_rollback_for_mysql((trx_t*)
(thd->transaction.innobase_trx_handle));
if (trx_handle) {
error = trx_rollback_for_mysql(
(trx_t*) (thd->transaction.all.innobase_tid));
} else {
error = trx_rollback_last_sql_stat_for_mysql(
(trx_t*) (thd->transaction.all.innobase_tid));
}
DBUG_RETURN(convert_error_code_to_mysql(error));
}
......@@ -434,10 +592,10 @@ innobase_close_connection(
THD* thd) /* in: handle to the MySQL thread of the user
whose transaction should be rolled back */
{
if (NULL != thd->transaction.innobase_trx_handle) {
if (NULL != thd->transaction.all.innobase_tid) {
trx_free_for_mysql((trx_t*)
(thd->transaction.innobase_trx_handle));
(thd->transaction.all.innobase_tid));
}
return(0);
......@@ -482,7 +640,7 @@ ha_innobase::open(
/* out: 1 if error, 0 if success */
const char* name, /* in: table name */
int mode, /* in: not used */
int test_if_locked) /* in: not used */
uint test_if_locked) /* in: not used */
{
int error = 0;
uint buff_len;
......@@ -1042,12 +1200,14 @@ ha_innobase::store_key_val_for_row(
/******************************************************************
Convert a row in MySQL format to a row in Innobase format. Uses rec_buff
of the handle. */
static
void
ha_innobase::convert_row_to_innobase(
/*=================================*/
convert_row_to_innobase(
/*====================*/
dtuple_t* row, /* in/out: row in Innobase format */
char* record) /* in: row in MySQL format */
char* record, /* in: row in MySQL format */
byte* rec_buff,/* in: record buffer */
struct st_table* table) /* in: table in MySQL data dictionary */
{
Field* field;
dfield_t* dfield;
......@@ -1083,12 +1243,13 @@ ha_innobase::convert_row_to_innobase(
/******************************************************************
Convert a row in Innobase format to a row in MySQL format. */
static
void
ha_innobase::convert_row_to_mysql(
/*==============================*/
convert_row_to_mysql(
/*=================*/
char* record, /* in/out: row in MySQL format */
dtuple_t* row) /* in: row in Innobase format */
dtuple_t* row, /* in: row in Innobase format */
struct st_table* table) /* in: table in MySQL data dictionary */
{
Field* field;
dfield_t* dfield;
......@@ -1124,10 +1285,10 @@ ha_innobase::convert_row_to_mysql(
Converts a key value stored in MySQL format to an Innobase dtuple.
The last field of the key value may be just a prefix of a fixed length
field: hence the parameter key_len. */
static
dtuple_t*
ha_innobase::convert_key_to_innobase(
/*=================================*/
convert_key_to_innobase(
/*====================*/
dtuple_t* tuple, /* in/out: an Innobase dtuple which
must contain enough fields to be
able to store the key value */
......@@ -1231,7 +1392,7 @@ ha_innobase::write_row(
update_auto_increment();
}
assert(user_thd->transaction.innobase_trx_handle);
assert(user_thd->transaction.all.innobase_tid);
trx = check_trx_exists(user_thd);
/* Convert the MySQL row into an Innobase dtuple format */
......@@ -1240,7 +1401,7 @@ ha_innobase::write_row(
(row_prebuilt_t*) innobase_prebuilt,
(dict_table_t*) innobase_table_handle, trx);
convert_row_to_innobase(row, (char*) record);
convert_row_to_innobase(row, (char*) record, rec_buff, table);
error = row_insert_for_mysql((row_prebuilt_t*)innobase_prebuilt, trx);
......@@ -1257,16 +1418,19 @@ ha_innobase::write_row(
/**************************************************************************
Checks which fields have changed in a row and stores information
of them to an update vector. */
static
int
ha_innobase::calc_row_difference(
/*=============================*/
calc_row_difference(
/*================*/
/* out: error number or 0 */
upd_t* uvect, /* in/out: update vector */
byte* old_row, /* in: old row in MySQL format */
byte* new_row) /* in: new row in MySQL format */
byte* new_row, /* in: new row in MySQL format */
struct st_table* table, /* in: table in MySQL data dictionary */
byte* upd_buff, /* in: buffer to use */
row_prebuilt_t* prebuilt,/* in: Innobase prebuilt struct */
void* innobase_table_handle) /* in: Innobase table handle */
{
row_prebuilt_t* prebuilt = (row_prebuilt_t*) innobase_prebuilt;
Field* field;
uint n_fields;
ulint o_len;
......@@ -1353,7 +1517,7 @@ ha_innobase::update_row(
DBUG_ENTER("update_row");
assert(user_thd->transaction.innobase_trx_handle);
assert(user_thd->transaction.all.innobase_tid);
trx = check_trx_exists(user_thd);
uvect = row_get_prebuilt_update_vector(
......@@ -1363,13 +1527,14 @@ ha_innobase::update_row(
/* Build old row in the Innobase format (uses rec_buff of the
handle) */
convert_row_to_innobase(prebuilt->row_tuple, (char*) old_row);
convert_row_to_innobase(prebuilt->row_tuple, (char*) old_row,
rec_buff, table);
/* Build an update vector from the modified fields in the rows
(uses upd_buff of the handle) */
calc_row_difference(uvect, (byte*) old_row, new_row);
calc_row_difference(uvect, (byte*) old_row, new_row, table, upd_buff,
prebuilt, innobase_table_handle);
/* This is not a delete */
prebuilt->upd_node->is_delete = FALSE;
......@@ -1402,7 +1567,7 @@ ha_innobase::delete_row(
DBUG_ENTER("update_row");
assert(user_thd->transaction.innobase_trx_handle);
assert(user_thd->transaction.all.innobase_tid);
trx = check_trx_exists(user_thd);
uvect = row_get_prebuilt_update_vector(
......@@ -1412,8 +1577,8 @@ ha_innobase::delete_row(
/* Build old row in the Innobase format (uses rec_buff of the
handle) */
convert_row_to_innobase(prebuilt->row_tuple, (char*) record);
convert_row_to_innobase(prebuilt->row_tuple, (char*) record,
rec_buff, table);
/* This is a delete */
prebuilt->upd_node->is_delete = TRUE;
......@@ -1527,7 +1692,7 @@ ha_innobase::index_read(
/* TODO: currently we assume all reads perform consistent read! */
/* prebuilt->consistent_read = TRUE; */
assert(user_thd->transaction.innobase_trx_handle);
assert(user_thd->transaction.all.innobase_tid);
trx = check_trx_exists(user_thd);
pcur = prebuilt->pcur;
......@@ -1538,7 +1703,7 @@ ha_innobase::index_read(
if (key_ptr) {
convert_key_to_innobase(prebuilt->search_tuple, key_val_buff,
index, key, (unsigned char*) key_ptr,
index, key, (byte*) key_ptr,
(int) key_len);
} else {
/* We position the cursor to the last or the first entry
......@@ -1571,7 +1736,7 @@ ha_innobase::index_read(
trx, &mtr, 0);
if (ret == DB_SUCCESS) {
convert_row_to_mysql((char*) buf, prebuilt->row_tuple);
convert_row_to_mysql((char*) buf, prebuilt->row_tuple, table);
error = 0;
table->status = 0;
......@@ -1687,7 +1852,7 @@ ha_innobase::general_fetch(
ret = row_search_for_mysql(prebuilt->row_tuple, 0, prebuilt,
match_mode, trx, &mtr, direction);
if (ret == DB_SUCCESS) {
convert_row_to_mysql((char*) buf, prebuilt->row_tuple);
convert_row_to_mysql((char*) buf, prebuilt->row_tuple, table);
error = 0;
table->status = 0;
......@@ -1814,7 +1979,7 @@ int
ha_innobase::rnd_init(
/*==================*/
/* out: 0 or error number */
bool scan) /* in: ???????? */
mysql_bool scan) /* in: ???????? */
{
row_prebuilt_t* prebuilt = (row_prebuilt_t*) innobase_prebuilt;
......@@ -1931,7 +2096,8 @@ ha_innobase::info(
} else if (flag & HA_STATUS_ERRKEY) {
errkey = -1; /* TODO: get the key number from Innobase */
errkey = (unsigned int)-1; /* TODO: get the key number from
Innobase */
}
DBUG_VOID_RETURN;
......@@ -1948,9 +2114,12 @@ int ha_innobase::reset(void)
}
/**********************************************************************
As MySQL will execute an external lock for every new table it uses
we can use this to store the pointer to the THD in the handle. We use this
also in explicit locking of tables by request of the user. */
As MySQL will execute an external lock for every new table it uses when it
starts to process an SQL statement, we can use this function to store the
pointer to the THD in the handle. We will also use this function to communicate
to Innobase that a new SQL statement has started and that we must store a
savepoint to our transaction handle, so that we are able to roll back
the SQL statement in case of an error. */
int
ha_innobase::external_lock(
......@@ -1958,30 +2127,65 @@ ha_innobase::external_lock(
THD* thd, /* in: handle to the user thread */
int lock_type) /* in: lock type */
{
int error = 0;
row_prebuilt_t* prebuilt = (row_prebuilt_t*) innobase_prebuilt;
int error = 0;
trx_t* trx;
DBUG_ENTER("ha_innobase::external_lock");
update_thd(thd);
prebuilt->sql_stat_start = TRUE;
trx = check_trx_exists(thd);
if (lock_type != F_UNLCK) {
if (trx->n_mysql_tables_in_use == 0) {
trx_mark_sql_stat_end(trx);
}
trx->n_mysql_tables_in_use++;
} else {
trx->n_mysql_tables_in_use--;
}
DBUG_RETURN(error);
}
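
A standalone C++ sketch of the statement-boundary idea used in external_lock() above (the struct and function names here are invented stand-ins, not InnoDB's trx_t or its API): the first table lock of a statement is detected by a per-transaction table-use counter, and that is the point where the statement savepoint would be taken so the statement can later be rolled back on error.

#include <stdio.h>

/* Hypothetical stand-in for the transaction object */
struct fake_trx {
	int	n_tables_in_use;
};

static void
sketch_external_lock(fake_trx* trx, int locking)
{
	if (locking) {
		if (trx->n_tables_in_use == 0) {
			/* first table of the statement: take the
			statement savepoint here */
			printf("new SQL statement: savepoint taken\n");
		}
		trx->n_tables_in_use++;
	} else {
		trx->n_tables_in_use--;
	}
}

int main(void)
{
	fake_trx trx = {0};

	sketch_external_lock(&trx, 1);	/* lock table t1: savepoint */
	sketch_external_lock(&trx, 1);	/* lock table t2: same statement */
	sketch_external_lock(&trx, 0);	/* unlock t2 */
	sketch_external_lock(&trx, 0);	/* unlock t1; the next lock call
					starts a new statement */
	return 0;
}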
/* Currently, the following does nothing in Innobase: */
THR_LOCK_DATA **ha_innobase::store_lock(THD *thd, THR_LOCK_DATA **to,
enum thr_lock_type lock_type)
{
if (lock_type != TL_IGNORE && lock.type == TL_UNLOCK)
{
/* If we are not doing a LOCK TABLE, then allow multiple writers */
if ((lock_type >= TL_WRITE_CONCURRENT_INSERT &&
lock_type <= TL_WRITE) &&
!thd->in_lock_tables)
lock_type = TL_WRITE_ALLOW_WRITE;
lock.type=lock_type;
}
*to++= &lock;
return(to);
/*********************************************************************
Stores a MySQL lock into a 'lock' field in a handle. */
THR_LOCK_DATA**
ha_innobase::store_lock(
/*====================*/
/* out: pointer to the next
element in the 'to' array */
THD* thd, /* in: user thread handle */
THR_LOCK_DATA** to, /* in: pointer to an array
of pointers to lock structs;
pointer to the 'lock' field
of current handle is stored
next to this array */
enum thr_lock_type lock_type) /* in: lock type to store in
'lock' */
{
if (lock_type != TL_IGNORE && lock.type == TL_UNLOCK) {
/* If we are not doing a LOCK TABLE, then allow multiple
writers */
if ((lock_type >= TL_WRITE_CONCURRENT_INSERT &&
lock_type <= TL_WRITE) && !thd->in_lock_tables) {
lock_type = TL_WRITE_ALLOW_WRITE;
}
lock.type=lock_type;
}
*to++= &lock;
return(to);
}
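
A small standalone illustration of the downgrade rule in store_lock() above (the enum values below are simplified stand-ins, not the definitions in thr_lock.h; only their relative order matters): write lock requests in the TL_WRITE_CONCURRENT_INSERT..TL_WRITE range are stored as TL_WRITE_ALLOW_WRITE unless the statement is LOCK TABLES, so several writers can proceed concurrently.

#include <stdio.h>

/* Simplified stand-ins for the relevant thr_lock_type values */
enum sketch_lock_type {
	SK_TL_READ,
	SK_TL_WRITE_ALLOW_WRITE,
	SK_TL_WRITE_CONCURRENT_INSERT,
	SK_TL_WRITE_DELAYED,
	SK_TL_WRITE,
	SK_TL_WRITE_ONLY
};

static sketch_lock_type
sketch_store_lock(sketch_lock_type requested, int in_lock_tables)
{
	if (requested >= SK_TL_WRITE_CONCURRENT_INSERT &&
	    requested <= SK_TL_WRITE && !in_lock_tables) {
		return SK_TL_WRITE_ALLOW_WRITE;	/* allow multiple writers */
	}
	return requested;			/* keep the requested type */
}

int main(void)
{
	printf("%d\n", sketch_store_lock(SK_TL_WRITE, 0));	/* downgraded */
	printf("%d\n", sketch_store_lock(SK_TL_WRITE, 1));	/* kept: LOCK TABLES */
	printf("%d\n", sketch_store_lock(SK_TL_READ,  0));	/* kept: read lock */
	return 0;
}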
/*********************************************************************
......@@ -2206,15 +2410,17 @@ ha_innobase::create(
}
/*********************************************************************
Drops a table from an Innobase database. No one is allowed to have
locks on the table, not even the calling user when the table is
dropped. */
Drops a table from an Innobase database. Before calling this function,
MySQL calls innobase_commit to commit the transaction of the current user.
Then the current user cannot have locks set on the table. The drop table
operation inside Innobase waits, sleeping in a loop, until no other
user has locks on the table. */
int
ha_innobase::delete_table(
/*======================*/
/* out: error number */
const char* name) /* in: table name */
/* out: error number */
const char* name) /* in: table name */
{
ulint name_len;
int error;
......@@ -2340,10 +2546,10 @@ ha_innobase::records_in_range(
byte* key_val_buff2 = (byte*) my_malloc(table->reclength,
MYF(MY_WME));
dtuple_t* range_end;
mem_heap_t* heap;
ulint n_rows;
ulint mode1;
ulint mode2;
void* heap;
DBUG_ENTER("records_in_range");
......@@ -2363,9 +2569,7 @@ ha_innobase::records_in_range(
/* For the second key value we have to use allocated buffers: */
heap = mem_heap_create(100);
range_end = dtuple_create(heap, key->key_parts);
range_end = dtuple_create_for_mysql(&heap, key->key_parts);
convert_key_to_innobase(range_end, key_val_buff2, index,
key, (byte*) end_key, (int) end_key_len);
......@@ -2375,7 +2579,7 @@ ha_innobase::records_in_range(
n_rows = btr_estimate_n_rows_in_range(index, prebuilt->search_tuple,
mode1, range_end, mode2);
mem_heap_free(heap);
dtuple_free_for_mysql(heap);
my_free((char*) key_val_buff2, MYF(0));
DBUG_RETURN((ha_rows) n_rows);
......@@ -2398,7 +2602,8 @@ static INNOBASE_SHARE *get_share(const char *table_name)
INNOBASE_SHARE *share;
pthread_mutex_lock(&innobase_mutex);
uint length=(uint) strlen(table_name);
if (!(share=(INNOBASE_SHARE*) hash_search(&innobase_open_tables, table_name,
if (!(share=(INNOBASE_SHARE*) hash_search(&innobase_open_tables,
(byte*) table_name,
length)))
{
if ((share=(INNOBASE_SHARE *) my_malloc(sizeof(*share)+length+1,
......@@ -2407,7 +2612,7 @@ static INNOBASE_SHARE *get_share(const char *table_name)
share->table_name_length=length;
share->table_name=(char*) (share+1);
strmov(share->table_name,table_name);
if (hash_insert(&innobase_open_tables, (char*) share))
if (hash_insert(&innobase_open_tables, (byte*) share))
{
pthread_mutex_unlock(&innobase_mutex);
my_free((gptr) share,0);
......@@ -2427,7 +2632,7 @@ static void free_share(INNOBASE_SHARE *share)
pthread_mutex_lock(&innobase_mutex);
if (!--share->use_count)
{
hash_delete(&innobase_open_tables, (gptr) share);
hash_delete(&innobase_open_tables, (byte*) share);
thr_lock_delete(&share->lock);
pthread_mutex_destroy(&share->mutex);
my_free((gptr) share, MYF(0));
......
......@@ -21,15 +21,13 @@
#pragma interface /* gcc class implementation */
#endif
/* Save the MySQL bool type definition under this name: inside
ha_innobase we use the Innobase definition of the bool type. */
typedef bool mysql_bool;
/* This file defines the Innobase handler: the interface between MySQL and
Innobase */
extern "C" {
#include <data0types.h>
#include <dict0types.h>
#include <row0types.h>
}
typedef struct st_innobase_share {
THR_LOCK lock;
pthread_mutex_t mutex;
......@@ -73,12 +71,6 @@ class ha_innobase: public handler
ulong max_row_length(const byte *buf);
uint store_key_val_for_row(uint keynr, char* buff, const byte* record);
void convert_row_to_innobase(dtuple_t* row, char* record);
void convert_row_to_mysql(char* record, dtuple_t* row);
dtuple_t* convert_key_to_innobase(dtuple_t* tuple, byte* buf,
dict_index_t* index,
KEY* key, byte* key_ptr, int key_len);
int calc_row_difference(upd_t* uvect, byte* old_row, byte* new_row);
int update_thd(THD* thd);
int change_active_index(uint keynr);
int general_fetch(byte* buf, uint direction, uint match_mode);
......@@ -110,7 +102,7 @@ class ha_innobase: public handler
bool fast_key_read() { return 1;}
bool has_transactions() { return 1;}
int open(const char *name, int mode, int test_if_locked);
int open(const char *name, int mode, uint test_if_locked);
void initialize(void);
int close(void);
double scan_time();
......@@ -162,13 +154,14 @@ extern uint innobase_init_flags, innobase_lock_type;
extern ulong innobase_cache_size;
extern char *innobase_home, *innobase_tmpdir, *innobase_logdir;
extern long innobase_lock_scan_time;
extern long innobase_mirrored_log_groups, innobase_mirrored_log_groups;
extern long innobase_mirrored_log_groups, innobase_log_files_in_group;
extern long innobase_log_file_size, innobase_log_buffer_size;
extern long innobase_buffer_pool_size, innobase_additional_mem_pool_size;
extern long innobase_file_io_threads;
extern char *innobase_data_home_dir, *innobase_data_file_path;
extern char *innobase_log_group_home_dir, *innobase_log_arch_dir;
extern bool innobase_flush_log_at_trx_commit,innobase_log_archive;
extern bool innobase_flush_log_at_trx_commit, innobase_log_archive,
innobase_use_native_aio;
extern TYPELIB innobase_lock_typelib;
......@@ -176,7 +169,6 @@ bool innobase_init(void);
bool innobase_end(void);
bool innobase_flush_logs(void);
int innobase_commit(THD *thd);
int innobase_rollback(THD *thd);
int innobase_commit(THD *thd, void* trx_handle);
int innobase_rollback(THD *thd, void* trx_handle);
int innobase_close_connection(THD *thd);
......@@ -952,7 +952,7 @@ void sql_print_error(const char *format,...)
#ifndef DBUG_OFF
{
char buff[1024];
vsprintf(buff,format,args);
vsnprintf(buff,sizeof(buff)-1,format,args);
DBUG_PRINT("error",("%s",buff));
}
#endif
......
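
The change above replaces an unbounded vsprintf() with vsnprintf() so a long formatted message cannot overflow the fixed 1024-byte stack buffer. A minimal standalone sketch of the same pattern (the function name and message are made up for illustration):

#include <stdarg.h>
#include <stdio.h>

static void
log_message(const char *format, ...)
{
	char	buff[1024];
	va_list	args;

	va_start(args, format);
	/* With a size of sizeof(buff)-1 the output is truncated to fit and
	   always '\0'-terminated, so buff cannot overflow */
	vsnprintf(buff, sizeof(buff) - 1, format, args);
	va_end(args);

	fprintf(stderr, "%s\n", buff);
}

int main(void)
{
	log_message("table %s: error %d", "t1", 1016);
	return 0;
}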
......@@ -1190,7 +1190,11 @@ static void init_signals(void)
struct sigaction sa; sa.sa_flags = 0;
sigemptyset(&sa.sa_mask);
sigprocmask(SIG_SETMASK,&sa.sa_mask,NULL);
sa.sa_handler=handle_segfault;
#ifdef HAVE_DARWIN_THREADS
sa.sa_handler=( void (*)() ) handle_segfault;
#else
sa.sa_handler=handle_segfault;
#endif
sigaction(SIGSEGV, &sa, NULL);
(void) sigemptyset(&set);
#ifdef THREAD_SPECIFIC_SIGPIPE
......@@ -2512,6 +2516,8 @@ CHANGEABLE_VAR changeable_vars[] = {
#ifdef HAVE_BERKELEY_DB
{ "bdb_cache_size", (long*) &berkeley_cache_size,
KEY_CACHE_SIZE, 20*1024, (long) ~0, 0, IO_SIZE },
{"bdb_log_buffer_size", (long*) &berkeley_log_buffer_size, 0, 256*1024L,
~0L, 0, 1024},
{ "bdb_max_lock", (long*) &berkeley_max_lock,
10000, 0, (long) ~0, 0, 1 },
/* QQ: The following should be removed soon! */
......@@ -2622,6 +2628,7 @@ struct show_var_st init_vars[]= {
{"basedir", mysql_home, SHOW_CHAR},
#ifdef HAVE_BERKELEY_DB
{"bdb_cache_size", (char*) &berkeley_cache_size, SHOW_LONG},
{"bdb_log_buffer_size", (char*) &berkeley_log_buffer_size, SHOW_LONG},
{"bdb_home", (char*) &berkeley_home, SHOW_CHAR_PTR},
{"bdb_max_lock", (char*) &berkeley_max_lock, SHOW_LONG},
{"bdb_logdir", (char*) &berkeley_logdir, SHOW_CHAR_PTR},
......@@ -2812,7 +2819,7 @@ static void usage(void)
Don't flush key buffers between writes for any MyISAM\n\
table\n\
--enable-locking Enable system locking\n\
-T, --exit-info Print some debug info at exit\n\
-T, --exit-info Used for debugging; Use at your own risk!\n\
--flush Flush tables to disk between SQL commands\n\
-?, --help Display this help and exit\n\
--init-file=file Read SQL commands from this file at startup\n\
......
......@@ -4837,7 +4837,8 @@ end_write_group(JOIN *join, JOIN_TAB *join_tab __attribute__((unused)),
/*****************************************************************************
** Remove calculation with tables that aren't yet read. Remove also tests
** against fields that are read through key.
** against fields that are read through key where the table is not an
** outer join table.
** We can't remove tests that are made against columns which are stored
** in sorted order.
*****************************************************************************/
......@@ -4853,7 +4854,8 @@ static bool test_if_ref(Item_field *left_item,Item *right_item)
if (ref_item && ref_item->eq(right_item))
{
if (right_item->type() == Item::FIELD_ITEM)
return field->eq_def(((Item_field *) right_item)->field);
return (field->eq_def(((Item_field *) right_item)->field) &&
!field->table->maybe_null);
if (right_item->const_item())
{
// We can remove binary fields and numerical fields except float,
......
......@@ -17,7 +17,7 @@ use DBI;
use Getopt::Long;
$| = 1;
$VER = "2.0";
$VER = "2.1";
$opt_help = 0;
$opt_version = 0;
......@@ -40,6 +40,32 @@ my ($dbh, $progname, $mail_no_from_f, $mail_no_txt_f, $mail_too_big,
$mail_no_from_f = $mail_no_txt_f = $mail_too_big = $mail_forwarded =
$mail_duplicates = $mail_no_subject_f = $mail_inserted = 0;
$mail_fixed=0;
#
# Remove the following message tails (mailing-list footers) from the message
#
@remove_tail= (
"\n-*\nSend a mail to .*\n.*\n.*\$",
"\n-*\nPlease check .*\n.*\n\nTo unsubscribe, .*\n.*\n.*\nIf you have a broken.*\n.*\n.*\$",
"\n-*\nPlease check .*\n(.*\n){1,3}\nTo unsubscribe.*\n.*\n.*\$",
"\n-*\nPlease check .*\n.*\n\nTo unsubscribe.*\n.*\$",
"\n-*\nTo request this thread.*\nTo unsubscribe.*\n.*\.*\n.*\$",
"\n -*\n.*Send a mail to.*\n.*\n.*unsubscribe.*\$",
"\n-*\nTo request this thread.*\n\nTo unsubscribe.*\n.*\$"
);
# Generate regexps to remove tails where the unsubscribe footer is quoted
{
my (@tmp, $tail);
@tmp=();
foreach $tail (@remove_tail)
{
$tail =~ s/\n/\n[> ]*/g;
push(@tmp, $tail);
}
push @remove_tail,@tmp;
}
my %months = ('Jan' => 1, 'Feb' => 2, 'Mar' => 3, 'Apr' => 4, 'May' => 5,
'Jun' => 6, 'Jul' => 7, 'Aug' => 8, 'Sep' => 9, 'Oct' => 10,
......@@ -90,7 +116,8 @@ sub main
push @args, "mysql_socket=$opt_socket" if defined($opt_socket);
push @args, "mysql_read_default_group=mail_to_db";
$connect_arg .= join ';', @args;
$dbh = DBI->connect("$connect_arg", $opt_user, $opt_password)
$dbh = DBI->connect("$connect_arg", $opt_user, $opt_password,
{ PrintError => 0})
|| die "Couldn't connect: $DBI::errstr\n";
die "You must specify the database; use --db=" if (!defined($opt_db));
......@@ -127,6 +154,7 @@ sub main
print "Total number of mails:\t\t";
print $mail_inserted + $ignored;
print "\n";
print "Mails with unsubscribe removed:\t$mail_fixed\n";
exit(0);
}
......@@ -279,6 +307,9 @@ sub date_parser
print "Inbox filename: $file_name\n";
}
exit(1) if ($opt_stop_on_error);
$values->{'date'} = "";
$values->{'time_zone'} = "";
return;
}
$tmp = $3 . "-" . $months{$2} . "-" . "$1 $4";
$tmp.= defined($5) ? $5 : ":00";
......@@ -294,15 +325,29 @@ sub date_parser
sub update_table
{
my($dbh, $file_name, $values) = @_;
my($q);
my($q,$tail,$message);
if (!defined($values->{'subject'}) || !defined($values->{'to'}))
{
$mail_no_subject_f++;
return; # Ignore these
}
$values->{'message'} =~ s/^\s*//; #removes whitespaces from the beginning
$values->{'message'} =~ s/\s*$//; #removes whitespaces from the end
$message=$values->{'message'};
$message =~ s/^\s*//; # remove whitespace from the beginning
restart:
$message =~ s/[\s\n>]*$//; # remove whitespace and '>' from the end
$values->{'message'}=$message;
foreach $tail (@remove_tail)
{
$message =~ s/$tail//;
}
if ($message ne $values->{'message'})
{
$message =~ s/\s*$//; # remove whitespace from the end
$mail_fixed++;
goto restart; # Some mails may have duplicated messages
}
$q = "INSERT INTO $opt_table (";
$q.= "mail_id,";
......@@ -320,7 +365,8 @@ sub update_table
$q.= "NULL,";
$q.= "'" . $values->{'date'} . "',";
$q.= (defined($values->{'time_zone'}) ?
("'" . $values->{'time_zone'} . "',") : "NULL,");
$dbh->quote($values->{'time_zone'}) : "NULL");
$q.= ",";
$q.= defined($values->{'from'}) ? $dbh->quote($values->{'from'}) : "NULL";
$q.= ",";
$q.= defined($values->{'reply'}) ? $dbh->quote($values->{'reply'}) : "NULL";
......@@ -331,7 +377,7 @@ sub update_table
$q.= ",";
$q.= $dbh->quote($values->{'subject'});
$q.= ",";
$q.= $dbh->quote($values->{'message'});
$q.= $dbh->quote($message);
$q.= ",";
$q.= $dbh->quote($file_name);
$q.= ",";
......@@ -339,12 +385,12 @@ sub update_table
$q.= ")";
# Don't insert mails bigger than $opt_max_mail_size
if (length($values->{'message'}) > $opt_max_mail_size)
if (length($message) > $opt_max_mail_size)
{
$mail_too_big++;
}
# Don't insert mails without 'From' field
elsif ($values->{'from'} eq "")
elsif (!defined($values->{'from'}) || $values->{'from'} eq "")
{
$mail_no_from_f++;
}
......@@ -354,7 +400,7 @@ sub update_table
$mail_inserted++;
}
# Don't insert mails without the 'message'
elsif ($values->{'message'} eq "")
elsif ($message eq "")
{
$mail_no_txt_f++;
}
......
......@@ -9,7 +9,7 @@
use DBI;
use Getopt::Long;
$VER="1.4a";
$VER="1.5";
@fldnms= ("mail_from","mail_to","cc","date","time_zone","file","sbj","txt");
$fields=8;
......@@ -18,7 +18,7 @@ $fields=8;
$opt_user= $opt_password= "";
$opt_socket= "/tmp/mysql.sock";
$opt_port= 3306;
$opt_db="test";
$opt_db="mail";
$opt_table="mails";
$opt_help=$opt_count=0;
......@@ -61,7 +61,7 @@ foreach $val (@fldnms)
}
$fields++;
}
$query.= " from $opt_table where $ARGV[0]";
$query.= " from $opt_table where $ARGV[0] order by date desc";
####
#### Send query and save result
......