Commit 2e52dc6b authored by Andreas Jung's avatar Andreas Jung

new Medusa release

parent 6f5ae03a
-----BEGIN PGP SIGNED MESSAGE-----
I am proud to announce the first alpha release of Medusa.
Medusa is a 'server platform' - it provides a framework for
implementing asynchronous socket-based servers (tcp/ip and, on unix,
unix-domain sockets).
An asynchronous socket server is a server that can communicate with
many other socket clients and servers simultaneously, by multiplexing
I/O within a single process/thread. In the context of an HTTP server,
this means a single process can serve hundreds or even thousands of
clients, depending only on the operating system's configuration and
limitations.
There are several advantages to this approach:
o performance - no fork() or thread() start-up costs per hit.
o scalability - the overhead per client can be kept rather small,
on the order of several kilobytes of memory.
o persistence - a single-process server can easily coordinate the
actions of several different connections. This makes things like
proxy servers and gateways easy to implement. It also makes it
possible to share resources like database handles.
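To make the model concrete, here is a minimal sketch of an
asynchronous echo server written against asyncore, the dispatcher
layer at Medusa's core (illustrative code only, not part of the
distribution):

    import asyncore
    import socket

    class echo_channel (asyncore.dispatcher):
        # send each block of data straight back to the client.
        # (a real channel would buffer its output; see asynchat)
        def handle_read (self):
            data = self.recv (512)
            if data:
                self.send (data)

    class echo_server (asyncore.dispatcher):
        def __init__ (self, port):
            asyncore.dispatcher.__init__ (self)
            self.create_socket (socket.AF_INET, socket.SOCK_STREAM)
            self.bind (('', port))
            self.listen (5)
        def handle_accept (self):
            conn, addr = self.accept()
            echo_channel (conn)

    echo_server (9000)
    asyncore.loop()

Every connected client gets its own channel object, but all of them
are serviced by the single select() loop inside asyncore.loop().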
This first release of Medusa includes HTTP, FTP, and 'monitor' (remote
python interpreter) servers. Medusa can simultaneously support
several instances of either the same or different server types - for
example you could start up two HTTP servers, an FTP server, and a
monitor server. Then you could connect to the monitor server to
control and manipulate medusa while it is running.
Other servers and clients have been written (SMTP, POP3, NNTP), and
several are in the planning stages. [One particularly interesting
side-project is an integrated asynchronous mSQL client.]
I am distributing Medusa under a 'free for non-commercial use'
license. Python source code is included.
Medusa has not yet been placed under a 'real-world' load; such an
environment is difficult to simulate. I am very interested in all
feedback about Medusa's performance, but especially so for high-load
situations (greater than 1 hit/sec).
More information is available at:
http://www.nightmare.com/medusa/
- -Sam
rushing@nightmare.com
-----BEGIN PGP SIGNATURE-----
Version: 2.6.2
Comment: Processed by Mailcrypt 3.4, an Emacs/PGP interface
iQCVAwUBMv/2OGys8OGgJmJxAQGUyAQAgL+LMgz1nVEDzYvx6NROcRU5oMSNMQPG
4aUdZ3lMthAgfCrQ9bipVMtR2ouUeluC8qlZeaaeoT+mnMi5svoURZpAfCv0tIc4
CYfO6Ih8B1xaXaGC/ygRgIqN2alUXmyZmVoVxtAj6uFczP27i8QQ/3mSWLv7OskL
9Qg6fNo2Zg4=
=3anM
-----END PGP SIGNATURE-----
This is a major update of Medusa. Almost everything has been
rewritten; the web server has been rewritten from scratch
twice. [ain't Python nice. 8^) ]
Here is a quick rundown of the improvements and new features:
HTTP Server:
Good support for persistent connections (HTTP/1.0 _and_ 1.1)
required a redesign of the request-handling mechanism. Requests are
now separate objects, and as much as possible differences between
the two protocol versions have been hidden from the user. [I should
say 'extender'].
HTTP/1.0 persistence is provided via the 'Connection: Keep-Alive'
mechanism supported by most currently available browsers.
HTTP/1.1 default persistence is implemented, along with various
features to support pipelining, including the 'chunked' transfer
encoding. [which is enabled by default in situations where the
extension is providing data dynamically]
[a note on a change in terminology: 'extensions' are now 'handlers']
Sample request handlers for the basic authentication scheme and the
PUT method are provided, along with a demonstration of a
'publishing' interface - this allows the server to support updatable
web pages.
A sample handler for unix user directories (the familiar '~user/'
URI) is included.
The new handler mechanism is quite general and powerful: It is easy
to write handlers that 'wrap' other handlers to provide combined
behavior. (For example, the 'publishing' demo wraps an
authentication handler around a PUT handler to provide authentication
for page updates).
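To sketch the idea (a hypothetical handler, written against the
match()/handle_request() interface that the handler sources below
use):

    class counting_handler:
        "wrap another handler, counting the requests it serves"
        def __init__ (self, handler):
            self.handler = handler
            self.count = 0
        def match (self, request):
            # delegate matching to the wrapped handler
            return self.handler.match (request)
        def handle_request (self, request):
            self.count = self.count + 1
            self.handler.handle_request (request)

Wrapping a handler this way leaves its behavior unchanged while
layering the combined behavior on top.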
Sophisticated Logging:
A few different logging objects are implemented:
An implementation of the Unix 'syslog' is included: it understands
the syslog protocol natively, and can thus log asynchronously to
either the local host, or to a remote host. This means it will also
work on non-unix platforms.
A 'socket' logger: send log info directly to a network connection.
A 'file' logger: log into any file object.
The above logging objects can be combined using the 'multi' logger.
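For illustration, combining loggers might look like the following (a
sketch only - the exact class names and constructor arguments live in
logger.py and may differ):

    import sys
    import logger

    lg = logger.multi_logger ([
        logger.file_logger ('medusa.log'),   # log to a file
        logger.file_logger (sys.stdout)      # and echo to the console
    ])

Each server is then handed 'lg' as its logger object.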
DNS resolver:
A simple asynchronous caching DNS resolver is included - this
piggybacks off of a known name server. The resolver is currently
used to provide resolved hostnames for logging for the other
servers.
'Monitor' server:
This is a 'remote interpreter' server. Server administrators can
use this to get directly at the server WHILE IT IS RUNNING. I use
it to upgrade pieces of the server, or to install or remove handlers
`on the fly'. It is optionally protected by MD5-based challenge-
response authentication, and by stream-cipher encryption.
Encryption is available if you have access to the Python
Cryptography Toolkit, or something like it.
It's been difficult to convey the power and convenience of this
server: Anything that can be done from a python prompt can be done
while connected to it. It is also a tremendous aid when debugging
servers or extensions.
'Chat' server:
For reasons I won't pretend to understand, servers supporting
IRC-like 'chat' rooms of various types are quite popular in the
commercial world: This is a quick demonstration of how to write such
a server, and how to integrate it with medusa. This simple example
could easily be integrated into the web server to provide a
web-navigated, web-administered chat server.
That was the good news; here's the 'bad':
==================================================
I've ditched the command-line interface for the time being. In order
to make it sufficiently powerful I found myself inventing yet another
'configuration language'. This seemed rather silly given python's
innate ability to handle such things. So now medusa is driven by a
user 'script'. A sample script is provided with judicious commentary.
Probably the most glaring omission in Medusa is the lack of CGI support.
I have dropped this for several reasons:
1) it is unreasonably difficult to support in a portable fashion
2) it is clearly a hack in the worst sense of the word; insecure and
inefficient. why not just use inetd?
3) much more powerful things can be done within the framework of
Medusa with much less effort.
4) CGI can easily be provided using Apache or any other web server
by running it in 'tandem' with medusa [i.e., on another port].
If someone desperately needs the CGI support, I can explain how to
integrate it with Medusa - the code is not very different from the
module included with the Python library.
==================================================
Medusa is provided free of charge for non-commercial use. Commercial
use requires a license (US$200). Source code is included (it's called
"The Documentation"), and users are encouraged to develop and
distribute their own extensions under their own terms.
Note that the core of Medusa is an asynchronous socket library that is
distributed under a traditional Python copyright, so unless you're
plugging directly into medusa's higher-level servers, and doing so for
commercial purposes, you needn't worry about me getting Run Over By a
Bus.
More information is available from:
http://www.nightmare.com/medusa/
Enjoy!
-Sam Rushing
rushing@nightmare.com
Medusa Installation.
---------------------------------------------------------------------------
Medusa is distributed as Python source code. Before using Medusa, you
will need to install Python on your machine.
The Python interpreter, source, documentation, etc... may be obtained
from
http://www.python.org/
Versions for many different operating systems are available, including
Unix, 32-bit Windows (Win95 & NT), Macintosh, VMS, etc... Medusa has
been tested on Unix and Windows, though it may very well work on other
operating systems.
You don't need to learn Python in order to use Medusa. However, if
you are interested in extending Medusa, you should spend the few
minutes that it will take you to go through the Python Tutorial:
http://www.python.org/doc/tut/
Python is remarkably easy to learn, and I guarantee that it will be
worth your while. After only about thirty minutes, you should know
enough about Python to be able to start customizing and extending
Medusa.
---------------------------------------------------------------------------
Once you have installed Python, you are ready to configure Medusa.
Medusa does not use configuration files per se, or even command-line
arguments. It is configured via a 'startup script', written in
Python. A sample is provided in 'start_medusa.py'. You should make
a copy of this.
The sample startup script is heavily commented. Many (though not all)
of Medusa's features are made available in the startup script. You may
modify this script by commenting out portions, adding or changing
parameters, etc...
Here is a section from the front of 'start_medusa.py':

| if len(sys.argv) > 1:
|     # process a few convenient arguments
|     [HOSTNAME, IP_ADDRESS, PUBLISHING_ROOT] = sys.argv[1:]
| else:
|     HOSTNAME = 'www.nightmare.com'
|     # This is the IP address of the network interface you want
|     # your servers to be visible from. This can be changed to ''
|     # to listen on all interfaces.
|     IP_ADDRESS = '205.160.176.5'
|
|     # Root of the http and ftp server's published filesystems.
|     PUBLISHING_ROOT = '/home/www'
|
| HTTP_PORT = 8080    # The standard port is 80
| FTP_PORT = 8021     # The standard port is 21
| CHAT_PORT = 8888
| MONITOR_PORT = 9999
If you are familiar with the process of configuring a web or ftp
server, then these parameters should be fairly obvious: You will
need to change the hostname, IP address, and port numbers for the
server that you wish to run.
---------------------------------------------------------------------------
A Medusa configuration does not need to be this complex -
start_medusa.py is bloated somewhat by its attempt to include most of
the available features. Another example startup script is available
in the 'demo' subdirectory.
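For reference, a functional script needs only a few lines. Here is a
minimal sketch (patterned after the demo code shipped with this
release; adjust the root directory and port to taste):

| import asyncore
| import default_handler
| import filesys
| import http_server
|
| fs = filesys.os_filesystem ('/home/www')
| hs = http_server.http_server (ip='', port=8080)
| hs.install_handler (default_handler.default_handler (fs))
| asyncore.loop()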
---------------------------------------------------------------------------
Once you have made your own startup script, you may simply invoke
the Python interpreter on it:
[unix]
$ python start_medusa.py &
[win32]
d:\medusa\> start python start_medusa.py
Medusa (V3.8) started at Sat Jan 24 01:43:21 1998
Hostname: ziggurat.nightmare.com
Port:8080
<Unix User Directory Handler at 080e9c08 [~user/public_html, 0 filesystems loaded]>
FTP server started at Sat Jan 24 01:43:21 1998
Authorizer:<test_authorizer instance at 80e8938>
Hostname: ziggurat.nightmare.com
Port: 21
192.168.200.40:1450 - - [24/Jan/1998:07:43:23 -0500] "GET /status HTTP/1.0" 200 1638
192.168.200.40:1451 - - [24/Jan/1998:07:43:23 -0500] "GET /status/medusa.gif HTTP/1.0" 200 1084
---------------------------------------------------------------------------
Documentation for specific Medusa servers is somewhat lacking, mostly
because development continues to move rapidly. The best place to go
to understand Medusa and how it works is to dive into the source code.
Many of the more interesting features, especially the latest, are
described only in the source code.
Some notes on data flow in Medusa are available in
'docs/data_flow.html'
I encourage you to examine and experiment with Medusa. You may
develop your own extensions, handlers, etc... I appreciate feedback
from users and developers on desired features, and of course
descriptions of your most splendid hacks.
Medusa's design is somewhat novel compared to most other network
servers. In fact, the asynchronous i/o capability seems to have
attracted the majority of paying customers, who are often more
interested in harnessing the i/o framework than the actual web and ftp
servers.
-Sam Rushing
rushing@nightmare.com
Nightmare Software,
January 1998
# -*- Mode: Makefile; tab-width: 4 -*-

all: dist

dist: clean
	python util/name_dist.py

clean:
	find ./ -name '*.pyc' -exec rm {} \;
	find ./ -name '*~' -exec rm {} \;
# Make medusa into a package
__version__='$Revision: 1.7 $'[11:-2]
# -*- Mode: Python; tab-width: 4 -*-
# $Id: asynchat.py,v 1.16 2001/04/25 19:07:29 andreas Exp $
# Author: Sam Rushing <rushing@nightmare.com>
# ======================================================================
# Copyright 1996 by Sam Rushing
#
# All Rights Reserved
#
# Permission to use, copy, modify, and distribute this software and
# its documentation for any purpose and without fee is hereby
# granted, provided that the above copyright notice appear in all
# copies and that both that copyright notice and this permission
# notice appear in supporting documentation, and that the name of Sam
# Rushing not be used in advertising or publicity pertaining to
# distribution of the software without specific, written prior
# permission.
#
# SAM RUSHING DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE,
# INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN
# NO EVENT SHALL SAM RUSHING BE LIABLE FOR ANY SPECIAL, INDIRECT OR
# CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS
# OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
# CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
# ======================================================================
"""A class supporting chat-style (command/response) protocols.
This class adds support for 'chat' style protocols - where one side
sends a 'command', and the other sends a response (examples would be
the common internet protocols - smtp, nntp, ftp, etc..).
The handle_read() method looks at the input stream for the current
'terminator' (usually '\r\n' for single-line responses, '\r\n.\r\n'
for multi-line output), calling self.found_terminator() on its
receipt.
for example:
Say you build an async nntp client using this class. At the start
of the connection, you'll have self.terminator set to '\r\n', in
order to process the single-line greeting. Just before issuing a
'LIST' command you'll set it to '\r\n.\r\n'. The output of the LIST
command will be accumulated (using your own 'collect_incoming_data'
method) up to the terminator, and then control will be returned to
you - by calling your self.found_terminator() method.
"""
import socket
import asyncore
import string
class async_chat (asyncore.dispatcher):
    """This is an abstract class. You must derive from this class, and add
    the two methods collect_incoming_data() and found_terminator()"""

    # these are overridable defaults
    ac_in_buffer_size = 4096
    ac_out_buffer_size = 4096

    def __init__ (self, conn=None):
        self.ac_in_buffer = ''
        self.ac_out_buffer = ''
        self.producer_fifo = fifo()
        asyncore.dispatcher.__init__ (self, conn)

    def set_terminator (self, term):
        "Set the input delimiter. Can be a fixed string of any length, an integer, or None"
        self.terminator = term

    def get_terminator (self):
        return self.terminator

    # grab some more data from the socket,
    # throw it to the collector method,
    # check for the terminator,
    # if found, transition to the next state.

    def handle_read (self):
        try:
            data = self.recv (self.ac_in_buffer_size)
        except socket.error, why:
            self.handle_error()
            return

        self.ac_in_buffer = self.ac_in_buffer + data

        # Continue to search for self.terminator in self.ac_in_buffer,
        # while calling self.collect_incoming_data. The while loop
        # is necessary because we might read several data+terminator
        # combos with a single recv(1024).

        while self.ac_in_buffer:
            lb = len(self.ac_in_buffer)
            terminator = self.get_terminator()
            if terminator is None:
                # no terminator, collect it all
                self.collect_incoming_data (self.ac_in_buffer)
                self.ac_in_buffer = ''
            elif type(terminator) == type(0):
                # numeric terminator
                n = terminator
                if lb < n:
                    self.collect_incoming_data (self.ac_in_buffer)
                    self.ac_in_buffer = ''
                    self.terminator = self.terminator - lb
                else:
                    self.collect_incoming_data (self.ac_in_buffer[:n])
                    self.ac_in_buffer = self.ac_in_buffer[n:]
                    self.terminator = 0
                    self.found_terminator()
            else:
                # 3 cases:
                # 1) end of buffer matches terminator exactly:
                #    collect data, transition
                # 2) end of buffer matches some prefix:
                #    collect data to the prefix
                # 3) end of buffer does not match any prefix:
                #    collect data
                terminator_len = len(terminator)
                index = string.find (self.ac_in_buffer, terminator)
                if index != -1:
                    # we found the terminator
                    if index > 0:
                        # don't bother reporting the empty string (source of subtle bugs)
                        self.collect_incoming_data (self.ac_in_buffer[:index])
                    self.ac_in_buffer = self.ac_in_buffer[index+terminator_len:]
                    # This does the Right Thing if the terminator is changed here.
                    self.found_terminator()
                else:
                    # check for a prefix of the terminator
                    index = find_prefix_at_end (self.ac_in_buffer, terminator)
                    if index:
                        if index != lb:
                            # we found a prefix, collect up to the prefix
                            self.collect_incoming_data (self.ac_in_buffer[:-index])
                            self.ac_in_buffer = self.ac_in_buffer[-index:]
                        break
                    else:
                        # no prefix, collect it all
                        self.collect_incoming_data (self.ac_in_buffer)
                        self.ac_in_buffer = ''

    def handle_write (self):
        self.initiate_send ()

    def handle_close (self):
        self.close()

    def push (self, data):
        self.producer_fifo.push (simple_producer (data))
        self.initiate_send()

    def push_with_producer (self, producer):
        self.producer_fifo.push (producer)
        self.initiate_send()

    def readable (self):
        "predicate for inclusion in the readable for select()"
        return (len(self.ac_in_buffer) <= self.ac_in_buffer_size)

    def writable (self):
        "predicate for inclusion in the writable for select()"
        # return len(self.ac_out_buffer) or len(self.producer_fifo) or (not self.connected)
        # this is about twice as fast, though not as clear.
        return not (
                (self.ac_out_buffer == '') and
                self.producer_fifo.is_empty() and
                self.connected
                )

    def close_when_done (self):
        "automatically close this channel once the outgoing queue is empty"
        self.producer_fifo.push (None)

    # refill the outgoing buffer by calling the more() method
    # of the first producer in the queue
    def refill_buffer (self):
        _string_type = type('')
        while 1:
            if len(self.producer_fifo):
                p = self.producer_fifo.first()
                # a 'None' in the producer fifo is a sentinel,
                # telling us to close the channel.
                if p is None:
                    if not self.ac_out_buffer:
                        self.producer_fifo.pop()
                        self.close()
                    return
                elif type(p) is _string_type:
                    self.producer_fifo.pop()
                    self.ac_out_buffer = self.ac_out_buffer + p
                    return
                data = p.more()
                if data:
                    self.ac_out_buffer = self.ac_out_buffer + data
                    return
                else:
                    self.producer_fifo.pop()
            else:
                return

    def initiate_send (self):
        obs = self.ac_out_buffer_size
        # try to refill the buffer
        if (len (self.ac_out_buffer) < obs):
            self.refill_buffer()

        if self.ac_out_buffer and self.connected:
            # try to send the buffer
            try:
                num_sent = self.send (self.ac_out_buffer[:obs])
                if num_sent:
                    self.ac_out_buffer = self.ac_out_buffer[num_sent:]
            except socket.error, why:
                self.handle_error()
                return

    def discard_buffers (self):
        # Emergencies only!
        self.ac_in_buffer = ''
        self.ac_out_buffer = ''
        while self.producer_fifo:
            self.producer_fifo.pop()
class simple_producer:

    def __init__ (self, data, buffer_size=512):
        self.data = data
        self.buffer_size = buffer_size

    def more (self):
        if len (self.data) > self.buffer_size:
            result = self.data[:self.buffer_size]
            self.data = self.data[self.buffer_size:]
            return result
        else:
            result = self.data
            self.data = ''
            return result
class fifo:

    def __init__ (self, list=None):
        if not list:
            self.list = []
        else:
            self.list = list

    def __len__ (self):
        return len(self.list)

    def is_empty (self):
        return self.list == []

    def first (self):
        return self.list[0]

    def push (self, data):
        self.list.append (data)

    def pop (self):
        if self.list:
            result = self.list[0]
            del self.list[0]
            return (1, result)
        else:
            return (0, None)
# Given 'haystack', see if any prefix of 'needle' is at its end. This
# assumes an exact match has already been checked. Return the number of
# characters matched.
# for example:
# f_p_a_e ("qwerty\r", "\r\n") => 1
# f_p_a_e ("qwertydkjf", "\r\n") => 0
# f_p_a_e ("qwerty\r\n", "\r\n") => <undefined>
# this could maybe be made faster with a computed regex?
# [answer: no; circa Python-2.0, Jan 2001]
# new python: 28961/s
# old python: 18307/s
# re: 12820/s
# regex: 14035/s
def find_prefix_at_end (haystack, needle):
    l = len(needle) - 1
    while l and not haystack.endswith(needle[:l]):
        l -= 1
    return l
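# Example usage (a sketch, not part of the original module): a minimal
# line-oriented channel built on async_chat. Each CRLF-terminated line
# is collected and echoed back to the peer.
#
#   class line_channel (async_chat):
#       def __init__ (self, conn):
#           async_chat.__init__ (self, conn)
#           self.set_terminator ('\r\n')
#           self.buffer = ''
#       def collect_incoming_data (self, data):
#           self.buffer = self.buffer + data
#       def found_terminator (self):
#           line, self.buffer = self.buffer, ''
#           self.push ('echo: %s\r\n' % line)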
# -*- Mode: Python; tab-width: 4 -*-
#
# Author: Sam Rushing <rushing@nightmare.com>
# Copyright 1996-2000 by Sam Rushing
# All Rights Reserved.
#
RCS_ID = '$Id: auth_handler.py,v 1.2 2001/04/25 19:07:30 andreas Exp $'
# support for 'basic' authentication.
import base64
import md5
import re
import string
import time
import counter
import default_handler
get_header = default_handler.get_header
import http_server
import producers
# This is a 'handler' that wraps an authorization method
# around access to the resources normally served up by
# another handler.
# does anyone support digest authentication? (rfc2069)
class auth_handler:

    def __init__ (self, dict, handler, realm='default'):
        self.authorizer = dictionary_authorizer (dict)
        self.handler = handler
        self.realm = realm
        self.pass_count = counter.counter()
        self.fail_count = counter.counter()

    def match (self, request):
        # by default, use the given handler's matcher
        return self.handler.match (request)

    def handle_request (self, request):
        # authorize a request before handling it...
        scheme = get_header (AUTHORIZATION, request.header)

        if scheme:
            scheme = string.lower (scheme)
            if scheme == 'basic':
                # a compiled 're' pattern carries no match state, so
                # re-match the header to extract the challenge group
                cookie = get_header (AUTHORIZATION, request.header, 2)
                try:
                    decoded = base64.decodestring (cookie)
                except:
                    print 'malformed authorization info <%s>' % cookie
                    request.error (400)
                    return
                auth_info = string.split (decoded, ':')
                if self.authorizer.authorize (auth_info):
                    self.pass_count.increment()
                    request.auth_info = auth_info
                    self.handler.handle_request (request)
                else:
                    self.handle_unauthorized (request)
            #elif scheme == 'digest':
            #    print 'digest: ',AUTHORIZATION.group(2)
            else:
                print 'unknown/unsupported auth method: %s' % scheme
                self.handle_unauthorized (request)
        else:
            # list both? prefer one or the other?
            # you could also use a 'nonce' here. [see below]
            #auth = 'Basic realm="%s" Digest realm="%s"' % (self.realm, self.realm)
            #nonce = self.make_nonce (request)
            #auth = 'Digest realm="%s" nonce="%s"' % (self.realm, nonce)
            #request['WWW-Authenticate'] = auth
            #print 'sending header: %s' % request['WWW-Authenticate']
            self.handle_unauthorized (request)

    def handle_unauthorized (self, request):
        # We are now going to receive data that we want to ignore;
        # drop the terminator to swallow any file data accompanying
        # the request.
        self.fail_count.increment()
        request.channel.set_terminator (None)
        request['Connection'] = 'close'
        request['WWW-Authenticate'] = 'Basic realm="%s"' % self.realm
        request.error (401)

    def make_nonce (self, request):
        "A digest-authentication <nonce>, constructed as suggested in RFC 2069"
        ip = request.channel.server.ip
        now = str (long (time.time()))[:-1]
        private_key = str (id (self))
        nonce = string.join ([ip, now, private_key], ':')
        return self.apply_hash (nonce)

    def apply_hash (self, s):
        "Apply MD5 to a string <s>, then wrap it in base64 encoding."
        m = md5.new()
        m.update (s)
        d = m.digest()
        # base64.encodestring tacks on an extra linefeed.
        return base64.encodestring (d)[:-1]

    def status (self):
        # Thanks to mwm@contessa.phone.net (Mike Meyer)
        r = [
            producers.simple_producer (
                '<li>Authorization Extension : '
                '<b>Unauthorized requests:</b> %s<ul>' % self.fail_count
            )
        ]
        if hasattr (self.handler, 'status'):
            r.append (self.handler.status())
        r.append (
            producers.simple_producer ('</ul>')
        )
        return producers.composite_producer (
            http_server.fifo (r)
        )

class dictionary_authorizer:

    def __init__ (self, dict):
        self.dict = dict

    def authorize (self, auth_info):
        [username, password] = auth_info
        if (self.dict.has_key (username)) and (self.dict[username] == password):
            return 1
        else:
            return 0

AUTHORIZATION = re.compile (
    #               scheme  challenge
    'Authorization: ([^ ]+) (.*)',
    re.IGNORECASE
)
# -*- Mode: Python; tab-width: 4 -*-
#
# Author: Sam Rushing <rushing@nightmare.com>
# Copyright 1997-2000 by Sam Rushing
# All Rights Reserved.
#
RCS_ID = '$Id: chat_server.py,v 1.2 2001/04/25 19:07:30 andreas Exp $'
import string
VERSION = string.split(RCS_ID)[2]
import socket
import asyncore
import asynchat
import status_handler
class chat_channel (asynchat.async_chat):

    def __init__ (self, server, sock, addr):
        asynchat.async_chat.__init__ (self, sock)
        self.server = server
        self.addr = addr
        self.set_terminator ('\r\n')
        self.data = ''
        self.nick = None
        self.push ('nickname?: ')

    def collect_incoming_data (self, data):
        self.data = self.data + data

    def found_terminator (self):
        line = self.data
        self.data = ''
        if self.nick is None:
            self.nick = string.split (line)[0]
            if not self.nick:
                self.nick = None
                self.push ('huh? gimmee a nickname: ')
            else:
                self.greet()
        else:
            if not line:
                pass
            elif line[0] != '/':
                self.server.push_line (self, line)
            else:
                self.handle_command (line)

    def greet (self):
        self.push ('Hello, %s\r\n' % self.nick)
        num_channels = len(self.server.channels)-1
        if num_channels == 0:
            self.push ('[Kinda lonely in here... you\'re the only caller!]\r\n')
        else:
            self.push ('[There are %d other callers]\r\n' % (len(self.server.channels)-1))
            nicks = map (lambda x: x.get_nick(), self.server.channels.keys())
            self.push (string.join (nicks, '\r\n ') + '\r\n')
        self.server.push_line (self, '[joined]')

    def handle_command (self, command):
        import types
        command_line = string.split(command)
        name = 'cmd_%s' % command_line[0][1:]
        if hasattr (self, name):
            # make sure it's a method...
            method = getattr (self, name)
            if type(method) == type(self.handle_command):
                method (command_line[1:])
        else:
            self.push ('unknown command: %s' % command_line[0])

    def cmd_quit (self, args):
        self.server.push_line (self, '[left]')
        self.push ('Goodbye!\r\n')
        self.close_when_done()

    # alias for '/quit' - '/q'
    cmd_q = cmd_quit

    def push_line (self, nick, line):
        self.push ('%s: %s\r\n' % (nick, line))

    def handle_close (self):
        self.close()

    def close (self):
        del self.server.channels[self]
        asynchat.async_chat.close (self)

    def get_nick (self):
        if self.nick is not None:
            return self.nick
        else:
            return 'Unknown'

class chat_server (asyncore.dispatcher):

    SERVER_IDENT = 'Chat Server (V%s)' % VERSION

    channel_class = chat_channel

    spy = 1

    def __init__ (self, ip='', port=8518):
        self.port = port
        self.create_socket (socket.AF_INET, socket.SOCK_STREAM)
        self.bind ((ip, port))
        print '%s started on port %d' % (self.SERVER_IDENT, port)
        self.listen (5)
        self.channels = {}
        self.count = 0

    def handle_accept (self):
        conn, addr = self.accept()
        self.count = self.count + 1
        print 'client #%d - %s:%d' % (self.count, addr[0], addr[1])
        self.channels[self.channel_class (self, conn, addr)] = 1

    def push_line (self, from_channel, line):
        nick = from_channel.get_nick()
        if self.spy:
            print '%s: %s' % (nick, line)
        for c in self.channels.keys():
            if c is not from_channel:
                c.push ('%s: %s\r\n' % (nick, line))

    def status (self):
        lines = [
            '<h2>%s</h2>' % self.SERVER_IDENT,
            '<br>Listening on Port: %d' % self.port,
            '<br><b>Total Sessions:</b> %d' % self.count,
            '<br><b>Current Sessions:</b> %d' % (len(self.channels))
        ]
        return status_handler.lines_producer (lines)

    def writable (self):
        return 0

if __name__ == '__main__':
    import sys
    if len(sys.argv) > 1:
        port = string.atoi (sys.argv[1])
    else:
        port = 8518
    s = chat_server ('', port)
    asyncore.loop()
# -*- Mode: Python; tab-width: 4 -*-
# [ based on async_lib/consumer.py:function_chain.py ]
class continuation:

    'Package up a continuation as an object.'
    'Also a convenient place to store state.'

    def __init__ (self, fun, *args):
        self.funs = [(fun, args)]

    def __call__ (self, *args):
        fun, init_args = self.funs[0]
        self.funs = self.funs[1:]
        if self.funs:
            apply (fun, (self,) + init_args + args)
        else:
            apply (fun, init_args + args)

    def chain (self, fun, *args):
        self.funs.insert (0, (fun, args))
        return self

    def abort (self, *args):
        fun, init_args = self.funs[-1]
        apply (fun, init_args + args)
"""Bobo handler module
For use with medusa & python object publisher
copyright 1997 amos latteier
(code based on script_handler.py)
Use:
here is a sample fragment from a script to start medusa:
...
hs = http_server.http_server (IP_ADDRESS, HTTP_PORT, rs, lg)
...
sys.path.insert(0,'c:\\windows\\desktop\\medusa2\\www\\bobo')
import bobotest
bh=bobo_handler.bobo_handler(bobotest, debug=1) #create bobo handler
hs.install_handler(bh) #install handler in http server
...
This will install the bobo handler on the http server and give you
access to the bobotest module via urls like this:
http://myserver.com/bobotest/blah/blah
bobo_handler initialization options:
* debug: If the debug flag is set then the published module will be reloaded
  whenever its source is changed on disk. This is very handy for development.
* uri_base: If the uri_base isn't specified it defaults to /<module_name>
"""
__version__="1.03"
import cgi_module_publisher
import sys
import regex
import string
import os
try:
    from cStringIO import StringIO
except:
    from StringIO import StringIO

try:
    import thread
    mutex = thread.allocate_lock()
    THREADS = 1
except:
    THREADS = 0
import counter
import default_handler
import producers
split_path = default_handler.split_path
unquote = default_handler.unquote
get_header = default_handler.get_header
CONTENT_LENGTH = regex.compile ('Content-Length: \([0-9]+\)', regex.casefold)
# maps request headers to environment variables
#
header2env = {
    'Content-Length'  : 'CONTENT_LENGTH',
    'Content-Type'    : 'CONTENT_TYPE',
    'Referer'         : 'HTTP_REFERER',
    'User-Agent'      : 'HTTP_USER_AGENT',
    'Accept'          : 'HTTP_ACCEPT',
    'Accept-Charset'  : 'HTTP_ACCEPT_CHARSET',
    'Accept-Language' : 'HTTP_ACCEPT_LANGUAGE',
    'Host'            : None,
    'Connection'      : 'CONNECTION_TYPE',
    'Pragma'          : None,
    'Authorization'   : 'HTTP_AUTHORIZATION',
    'Cookie'          : 'HTTP_COOKIE',
    }
# convert keys to lower case for case-insensitive matching
#
for (key, value) in header2env.items():
    del header2env[key]
    key = string.lower(key)
    header2env[key] = value
class bobo_handler:
    "publishes a module via bobo"

    def __init__ (self, module, uri_base=None, debug=None):
        self.module = module
        self.debug = debug
        if self.debug:
            self.last_reload = self.module_mtime()
        self.hits = counter.counter()

        # if uri_base is unspecified, assume it
        # starts with the published module name
        #
        if not uri_base:
            uri_base = "/%s" % module.__name__
        elif uri_base[-1] == "/":   # kill possible trailing /
            uri_base = uri_base[:-1]
        self.uri_base = uri_base

        uri_regex = '%s.*' % self.uri_base
        self.uri_regex = regex.compile(uri_regex)

    def match (self, request):
        uri = request.uri
        if self.uri_regex.match (uri) == len(uri):
            return 1
        else:
            return 0

    def handle_request (self, request):
        [path, params, query, fragment] = split_path (request.uri)

        while path and path[0] == '/':
            path = path[1:]

        if '%' in path:
            path = unquote (path)

        self.hits.increment()

        if query:
            # cgi_publisher_module doesn't want the leading '?'
            query = query[1:]

        self.env = {}
        self.env['REQUEST_METHOD'] = string.upper(request.command)
        self.env['SERVER_PORT'] = '%s' % request.channel.server.port
        self.env['SERVER_NAME'] = request.channel.server.server_name
        self.env['SERVER_SOFTWARE'] = request['Server']
        self.env['SCRIPT_NAME'] = self.uri_base    # are script_name and path_info ok?
        self.env['QUERY_STRING'] = query
        try:
            path_info = string.split(path, self.uri_base[1:], 1)[1]
        except:
            path_info = ''
        self.env['PATH_INFO'] = path_info
        self.env['GATEWAY_INTERFACE'] = 'CGI/1.1'   # what should this really be?
        self.env['REMOTE_ADDR'] = request.channel.addr[0]
        self.env['REMOTE_HOST'] = request.channel.addr[0]   # what should this be?

        for header in request.header:
            [key, value] = string.split(header, ": ", 1)
            key = string.lower(key)
            if header2env.has_key(key):
                if header2env[key]:
                    self.env[header2env[key]] = value
            else:
                key = 'HTTP_' + string.upper(string.join(string.split(key, "-"), "_"))
                self.env[key] = value

        # remove empty environment variables
        #
        for key in self.env.keys():
            if self.env[key] == "" or self.env[key] == None:
                del self.env[key]

        if request.command in ["post", "put"]:
            request.collector = input_collector(self, request)
            request.channel.set_terminator (None)
        else:
            sin = StringIO('')
            self.continue_request(sin, request)

    def continue_request (self, sin, request):
        "continue handling request now that we have the stdin"
        # if we have threads, spawn a new one to publish the module
        # so we don't freeze the server while publishing.
        if THREADS:
            thread.start_new_thread(self._continue_request, (sin, request))
        else:
            self._continue_request(sin, request)

    def _continue_request (self, sin, request):
        "continue handling request now that we have the stdin"
        sout = StringIO()
        serr = StringIO()

        if self.debug:
            m_time = self.module_mtime()
            if m_time > self.last_reload:
                reload(self.module)
                self.last_reload = m_time

        if THREADS:
            mutex.acquire()
        cgi_module_publisher.publish_module(
            self.module.__name__,
            stdin=sin,
            stdout=sout,
            stderr=serr,
            environ=self.env,
            #debug=1
        )
        if THREADS:
            mutex.release()

        if serr.tell():
            request.log(serr.getvalue())

        response = sout
        response = response.getvalue()

        # set response headers
        [headers, html] = string.split(response, "\n\n", 1)
        headers = string.split(headers, "\n")

        for line in headers:
            [header, header_value] = string.split(line, ": ", 1)
            if header == "Status":
                [code, message] = string.split(header_value, " ", 1)
                request.reply_code = string.atoi(code)
            else:
                request[header] = header_value

        request.push(html)
        request.done()

    def module_mtime (self):
        "returns the last modified date for a given module's source file"
        return os.stat(self.module.__file__)[8]

    def status (self):
        # counters format via __str__ (they have no __int__)
        return producers.simple_producer (
            '<li>Bobo Handler'
            + '<ul>'
            + ' <li><b>Hits:</b> %s' % self.hits
            + '</ul>'
        )

class input_collector:
    "gathers input for put and post requests"

    def __init__ (self, handler, request):
        self.handler = handler
        self.request = request
        self.data = StringIO()

        # make sure there's a content-length header
        self.cl = get_header (CONTENT_LENGTH, request.header)

        if not self.cl:
            request.error(411)
            return
        else:
            self.cl = string.atoi(self.cl)

    def collect_incoming_data (self, data):
        self.data.write(data)
        if self.data.tell() >= self.cl:
            self.data.seek(0)

            h = self.handler
            r = self.request

            # set the terminator back to the default
            self.request.channel.set_terminator ('\r\n\r\n')
            del self.handler
            del self.request

            h.continue_request(self.data, r)
# -*- Mode: Python; tab-width: 4 -*-
import time
def main (env, stdin, stdout):
    # write out the response
    stdout.write ("HTTP/1.0 200 OK\r\n")
    # write out a header
    stdout.write ("Content-Type: text/html\r\n")
    stdout.write ("\r\n")
    stdout.write ("<html><body>")
    for i in range (10, 0, -1):
        stdout.write ("<br> <b>tick</b> %d\r\n" % i)
        stdout.flush()
        time.sleep (3)
    stdout.write ("</body></html>\r\n")
# -*- Mode: Python; tab-width: 4 -*-
# It is tempting to add an __int__ method to this class, but it's not
# a good idea. This class tries to gracefully handle integer
# overflow, and to hide this detail from both the programmer and the
# user. Note that the __str__ method can be relied on for printing out
# the value of a counter:
#
# >>> print 'Total Clients: %s' % self.total_clients
#
# If you need to do arithmetic with the value, then use the 'as_long'
# method, the use of long arithmetic is a reminder that the counter
# will overflow.
class counter:
    "general-purpose counter"

    def __init__ (self, initial_value=0):
        self.value = initial_value

    def increment (self, delta=1):
        result = self.value
        try:
            self.value = self.value + delta
        except OverflowError:
            self.value = long(self.value) + delta
        return result

    def decrement (self, delta=1):
        result = self.value
        try:
            self.value = self.value - delta
        except OverflowError:
            self.value = long(self.value) - delta
        return result

    def as_long (self):
        return long(self.value)

    def __nonzero__ (self):
        return self.value != 0

    def __repr__ (self):
        return '<counter value=%s at %x>' % (self.value, id(self))

    def __str__ (self):
        return str(long(self.value))[:-1]
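# Example usage (a sketch; note that under the 1.5-era Python this
# targets, str() of a long ends in 'L', hence the [:-1] in __str__):
#
#   c = counter()
#   c.increment()
#   c.increment (5)
#   print 'hits: %s' % c        # formats via __str__
#   n = c.as_long() * 2L        # arithmetic via as_long()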
# -*- Mode: Python; tab-width: 4 -*-
#
# Author: Sam Rushing <rushing@nightmare.com>
# Copyright 1997 by Sam Rushing
# All Rights Reserved.
#
RCS_ID = '$Id: default_handler.py,v 1.6 2001/04/25 19:07:30 andreas Exp $'
# standard python modules
import os
import re
import posixpath
import stat
import string
import time
# medusa modules
import http_date
import http_server
import mime_type_table
import status_handler
import producers
unquote = http_server.unquote
# This is the 'default' handler. it implements the base set of
# features expected of a simple file-delivering HTTP server. file
# services are provided through a 'filesystem' object, the very same
# one used by the FTP server.
#
# You can replace or modify this handler if you want a non-standard
# HTTP server. You can also derive your own handler classes from
# it.
#
# support for handling POST requests is available in the derived
# class <default_with_post_handler>, defined below.
#
from counter import counter
class default_handler:

    valid_commands = ['get', 'head']

    IDENT = 'Default HTTP Request Handler'

    # Pathnames that are tried when a URI resolves to a directory name
    directory_defaults = [
        'index.html',
        'default.html'
    ]

    default_file_producer = producers.file_producer

    def __init__ (self, filesystem):
        self.filesystem = filesystem
        # count total hits
        self.hit_counter = counter()
        # count file deliveries
        self.file_counter = counter()
        # count cache hits
        self.cache_counter = counter()

    hit_counter = 0

    def __repr__ (self):
        return '<%s (%s hits) at %x>' % (
            self.IDENT,
            self.hit_counter,
            id (self)
        )

    # always match, since this is a default
    def match (self, request):
        return 1

    # handle a file request, with caching.
    def handle_request (self, request):

        if request.command not in self.valid_commands:
            request.error (400) # bad request
            return

        self.hit_counter.increment()

        path, params, query, fragment = request.split_uri()

        if '%' in path:
            path = unquote (path)

        # strip off all leading slashes
        while path and path[0] == '/':
            path = path[1:]

        if self.filesystem.isdir (path):
            if path and path[-1] != '/':
                request['Location'] = 'http://%s/%s/' % (
                    request.channel.server.server_name,
                    path
                )
                request.error (301)
                return

            # we could also generate a directory listing here,
            # may want to move this into another method for that
            # purpose
            found = 0
            if path and path[-1] != '/':
                path = path + '/'
            for default in self.directory_defaults:
                p = path + default
                if self.filesystem.isfile (p):
                    path = p
                    found = 1
                    break
            if not found:
                request.error (404) # Not Found
                return

        elif not self.filesystem.isfile (path):
            request.error (404) # Not Found
            return

        file_length = self.filesystem.stat (path)[stat.ST_SIZE]

        ims = get_header_match (IF_MODIFIED_SINCE, request.header)

        length_match = 1
        if ims:
            length = ims.group (4)
            if length:
                try:
                    length = string.atoi (length)
                    if length != file_length:
                        length_match = 0
                except:
                    pass

        ims_date = 0

        if ims:
            ims_date = http_date.parse_http_date (ims.group (1))

        try:
            mtime = self.filesystem.stat (path)[stat.ST_MTIME]
        except:
            request.error (404)
            return

        if length_match and ims_date:
            if mtime <= ims_date:
                request.reply_code = 304
                request.done()
                self.cache_counter.increment()
                return
        try:
            file = self.filesystem.open (path, 'rb')
        except IOError:
            request.error (404)
            return

        request['Last-Modified'] = http_date.build_http_date (mtime)
        request['Content-Length'] = file_length
        self.set_content_type (path, request)

        if request.command == 'get':
            request.push (self.default_file_producer (file))

        self.file_counter.increment()
        request.done()

    def set_content_type (self, path, request):
        ext = string.lower (get_extension (path))
        if mime_type_table.content_type_map.has_key (ext):
            request['Content-Type'] = mime_type_table.content_type_map[ext]
        else:
            # TODO: test a chunk off the front of the file for 8-bit
            # characters, and use application/octet-stream instead.
            request['Content-Type'] = 'text/plain'

    def status (self):
        return producers.simple_producer (
            '<li>%s' % status_handler.html_repr (self)
            + '<ul>'
            + ' <li><b>Total Hits:</b> %s' % self.hit_counter
            + ' <li><b>Files Delivered:</b> %s' % self.file_counter
            + ' <li><b>Cache Hits:</b> %s' % self.cache_counter
            + '</ul>'
        )
# HTTP/1.0 doesn't say anything about the "; length=nnnn" addition
# to this header. I suppose its purpose is to avoid the overhead
# of parsing dates...
IF_MODIFIED_SINCE = re.compile (
'If-Modified-Since: ([^;]+)((; length=([0-9]+)$)|$)',
re.IGNORECASE
)
USER_AGENT = re.compile ('User-Agent: (.*)', re.IGNORECASE)
CONTENT_TYPE = re.compile (
r'Content-Type: ([^;]+)((; boundary=([A-Za-z0-9\'\(\)+_,./:=?-]+)$)|$)',
re.IGNORECASE
)
get_header = http_server.get_header
get_header_match = http_server.get_header_match
def get_extension (path):
    dirsep = string.rfind (path, '/')
    dotsep = string.rfind (path, '.')
    if dotsep > dirsep:
        return path[dotsep+1:]
    else:
        return ''
# -*- Mode: Python; tab-width: 4 -*-
# Demonstrates use of the auth and put handlers to support publishing
# web pages via HTTP. This is supported by Netscape Communicator and
# probably the Internet Exploder.
# It is also possible to set up the ftp server to do essentially the
# same thing.
# Security Note: Using HTTP with the 'Basic' authentication scheme is
# only slightly more secure than using FTP: both techniques involve
sending an unencrypted password over the network (http basic auth
# base64-encodes the username and password). The 'Digest' scheme is
# much more secure, but not widely supported yet. <sigh>
import asyncore
import default_handler
import http_server
import put_handler
import auth_handler
import filesys
# For this demo, we'll just use a dictionary of usernames/passwords.
# You can of course use anything that supports the mapping interface,
# and it would be pretty easy to set this up to use the crypt module
# on unix.
users = { 'mozart' : 'jupiter', 'beethoven' : 'pastoral' }
# The filesystem we will be giving access to
fs = filesys.os_filesystem ('/home/medusa')
# The 'default' handler - delivers files for the HTTP GET method.
dh = default_handler.default_handler (fs)
# Supports the HTTP PUT method...
ph = put_handler.put_handler (fs, '/.*')
# ... but be sure to wrap it with an auth handler:
ah = auth_handler.auth_handler (users, ph)
# Create a Web Server
hs = http_server.http_server (ip='', port=8080)
# install the handlers we created:
hs.install_handler (dh) # for GET
hs.install_handler (ah) # for PUT
asyncore.loop()
<html>
<body>
Medusa is Copyright 1996-1997, Sam Rushing (rushing@nightmare.com)
<hr>
<pre>
Medusa is provided free for all non-commercial use. If you are using
Medusa to make money, or you would like to distribute Medusa or any
derivative of Medusa commercially, then you must arrange a license
with me. Extension authors may either negotiate with me to include
their extension in the main distribution, or may distribute under
their own terms.
You may modify or extend Medusa, but you may not redistribute the
modified versions without permission.
<b>
NIGHTMARE SOFTWARE AND SAM RUSHING DISCLAIM ALL WARRANTIES WITH REGARD
TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
AND FITNESS, IN NO EVENT SHALL NIGHTMARE SOFTWARE OR SAM RUSHING BE
LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY
DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION,
ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS
SOFTWARE.
</b>
</pre>
For more information please contact me at <a href="mailto:rushing@nightmare.com">
rushing@nightmare.com</a>
<h1> What is Medusa? </h1>
<hr>
<p>
Medusa is an architecture for very-high-performance TCP/IP servers
(like HTTP, FTP, and NNTP). Medusa is different from most other
servers because it runs as a single process, multiplexing I/O with its
various client and server connections within a single process/thread.
<p>
It is capable of smoother and higher performance than most other
servers, while placing a dramatically reduced load on the server
machine. The single-process, single-thread model simplifies design
and enables some new persistence capabilities that are otherwise
difficult or impossible to implement.
<p>
Medusa is supported on any platform that can run Python and includes a
functional implementation of the &lt;socket&gt; and &lt;select&gt;
modules. This includes the majority of Unix implementations.
<p>
During development, it is constantly tested on Linux and Win32
[Win95/WinNT], but the core asynchronous capability has been shown to
work on several other platforms, including the Macintosh. It might
even work on VMS.
<h2>The Power of Python</h2>
<p>
A distinguishing feature of Medusa is that it is written entirely in
Python. Python (<a href="http://www.python.org/">http://www.python.org/</a>) is a
'very-high-level' object-oriented language developed by Guido van
Rossum (currently at CNRI). It is easy to learn, and includes many
modern programming features such as storage management, dynamic
typing, and an extremely flexible object system. It also provides
convenient interfaces to C and C++.
<p>
The rapid prototyping and delivery capabilities are hard to exaggerate;
for example
<ul>
<li>It took me longer to read the documentation for persistent HTTP
connections (the 'Keep-Alive' connection token) than to add the
feature to Medusa.
<li>A simple IRC-like chat server system was written in about 90 minutes.
</ul>
<p> I've heard similar stories from alpha test sites, and other users of
the core async library.
<h2>Server Notes</h2>
<p>Both the FTP and HTTP servers use an abstracted 'filesystem object' to
gain access to a given directory tree. One possible server extension
technique would be to build behavior into this filesystem object,
rather than directly into the server: Then the extension could be
shared with both the FTP and HTTP servers.
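<p>For instance, a read-only wrapper could be sketched like this
(hypothetical code, assuming the open/isdir/isfile/stat style of
interface used by the default handler):
<pre>
class read_only_filesystem:
    def __init__ (self, fs):
        self.fs = fs
    def open (self, path, mode):
        if 'w' in mode or 'a' in mode:
            raise IOError, 'read-only filesystem'
        return self.fs.open (path, mode)
    def __getattr__ (self, attr):
        # delegate everything else (isdir, isfile, stat, ...)
        return getattr (self.fs, attr)
</pre>
Installed under both servers, one wrapper would make the whole tree
read-only in both places.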
<h3>HTTP</h3>
<p>The core HTTP server itself is quite simple - all functionality is
provided through 'extensions'. Extensions can be plugged in
dynamically. [i.e., you could log in to the server via the monitor
service and add or remove an extension on the fly]. The basic
file-delivery service is provided by a 'default' extension, which
matches all URI's. You can build more complex behavior by replacing
or extending this class.
<p>The default extension includes support for the 'Connection: Keep-Alive'
token, and will re-use a client channel when requested by the client.
<h3>FTP</h3>
<p>On Unix, the ftp server includes support for 'real' users, so that it
may be used as a drop-in replacement for the normal ftp server. Since
most ftp servers on Unix use the 'forking' model, each child process
changes its user/group persona after a successful login. This
appears to be a secure design.
<p>Medusa takes a different approach - whenever Medusa performs an
operation for a particular user [listing a directory, opening a file],
it temporarily switches to that user's persona _only_ for the duration
of the operation. [and each such operation is protected by a
try/finally exception handler].
<p>To do this Medusa MUST run with super-user privileges. This is a
HIGHLY experimental approach, and although it has been thoroughly
tested on Linux, security problems may still exist. If you are
concerned about the security of your server machine, AND YOU SHOULD
BE, I suggest running Medusa's ftp server in anonymous-only mode,
under an account with limited privileges ('nobody' is usually used for
this purpose).
<p>I am very interested in any feedback on this feature, most
especially information on how the server behaves on different
implementations of Unix, and of course any security problems that are
found.
<hr>
<h3>Monitor</h3>
<p>The monitor server gives you remote, 'back-door' access to your server
while it is running. It implements a remote python interpreter. Once
connected to the monitor, you can do just about anything you can do from
the normal python interpreter. You can examine data structures, servers,
connection objects. You can enable or disable extensions, restart the server,
reload modules, etc...
<p>The monitor server is protected with an MD5-based authentication
similar to that proposed in RFC1725 for the POP3 protocol. The server
sends the client a timestamp, which is then appended to a secret
password. The resulting md5 digest is sent back to the server, which
then compares this to the expected result. Failed login attempts are
logged and immediately disconnected. The password itself is not sent
over the network (unless you have foolishly transmitted it yourself
through an insecure telnet or X11 session. 8^)
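<p>The client's side of the exchange amounts to a few lines (a sketch
only; see the distributed monitor client for the actual framing):
<pre>
import md5

def challenge_response (timestamp, password):
    # digest of the server's timestamp with the shared secret appended
    return md5.new (timestamp + password).digest()
</pre>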
<p>For this reason telnet cannot be used to connect to the monitor
server when it is in a secure mode (the default). A client program is
provided for this purpose. You will be prompted for a password when
starting up the server, and by the monitor client.
<p>For extra added security on Unix, the monitor server will
eventually be able to use a Unix-domain socket, which can be protected
behind a 'firewall' directory (similar to the InterNet News server).
<hr>
<h2>Performance Notes</h2>
<h3>The <code>select()</code> function</h3>
<p>At the heart of Medusa is a single <code>select()</code> loop.
This loop handles all open socket connections, both servers and
clients. It is in effect constantly asking the system: 'which of
these sockets has activity?'. Performance of this system call can
vary widely between operating systems.
<p>There are also often built-in limits on the number of sockets
('file descriptors') that a single process, or a whole system, can
manipulate at the same time. Early versions of Linux placed draconian
limits (256) that have since been raised. Windows 95 has a limit of
64, while OSF/1 seems to allow up to 4096.
<p>These limits don't affect only Medusa; you will find them described
in the documentation for other web and ftp servers, too.
<p>The documentation for the Apache web server has some excellent
notes on tweaking performance for various Unix implementations. See
<a href="http://www.apache.org/docs/misc/perf.html">
http://www.apache.org/docs/misc/perf.html</a>
for more information.
<h3>Buffer sizes</h3>
<p>
The default buffer sizes used by Medusa are set with a bias toward
Internet-based servers: They are relatively small, so that the buffer
overhead for each connection is low. The assumption is that Medusa
will be talking to a large number of low-bandwidth connections, rather
than a smaller number of high-bandwidth ones.
<p>This choice trades run-time memory use for efficiency - the down
side of this is that high-speed local connections (i.e., over a local
ethernet) will transfer data at a slower rate than necessary.
<p>This parameter can easily be tweaked by the site designer, and can
in fact be adjusted on a per-server or even per-client basis. For
example, you could have the FTP server use larger buffer sizes for
connections from certain domains.
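<p>As a sketch (hypothetical subclass; the defaults live on
asynchat's async_chat class, and servers select their channel
implementation via a channel_class attribute):
<pre>
class lan_http_channel (http_server.http_channel):
    # larger buffers for high-bandwidth local connections
    ac_in_buffer_size  = 65536
    ac_out_buffer_size = 65536
</pre>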
<p>If there's enough interest, I have some rough ideas for how to make
these buffer sizes automatically adjust to an optimal setting. Send
email if you'd like to see this feature.
<hr>
<p>See <a href="medusa.html">./medusa.html</a> for a brief overview of
some of the ideas behind Medusa's design, and for a description of
current and upcoming features.
<p><h3>Enjoy!</h3>
<hr>
<br>-Sam Rushing
<br><a href="mailto:rushing@nightmare.com">rushing@nightmare.com</a>
<!--
Local Variables:
indent-use-tabs: nil
end:
-->
</body>
</html>
<h1>Data Flow in Medusa</h1>
<img src="data_flow.gif">
<p>Data flow, both input and output, is asynchronous. This is
signified by the <i>request</i> and <i>reply</i> queues in the above
diagram. This means that both requests and replies can get 'backed
up', and are still handled correctly. For instance, HTTP/1.1 supports
the concept of <i>pipelined requests</i>, where a series of requests
are sent immediately to a server, and the replies are sent as they are
processed. With a <i>synchronous</i> request, the client would have
to wait for a reply to each request before sending the next.</p>
<p>The input data is partitioned into requests by looking for a
<i>terminator</i>. A terminator is simply a protocol-specific
delimiter - often simply CRLF (carriage-return line-feed), though it
can be longer (for example, MIME multi-part boundaries can be
specified as terminators). The protocol handler is notified whenever
a complete request has been received.</p>
<p>The protocol handler then generates a reply, which is enqueued for
output back to the client. Sometimes, instead of queuing the actual
data, an object that will generate this data is used, called a
<i>producer</i>.</p>
<img src="producers.gif">
<p>The use of <code>producers</code> gives the programmer
extraordinary control over how output is generated and inserted into
the output queue. Though they are simple objects (requiring only a
single method, <i>more()</i>, to be defined), they can be
<i>composed</i> - simple producers can be wrapped around each other to
create arbitrarily complex behaviors. [now would be a good time to
browse through some of the producer classes in
<code>producers.py</code>.]</p>
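<p>The protocol is small enough to show whole (hypothetical classes,
for illustration only):
<pre>
class string_producer:
    "produce a string in fixed-size pieces, then ''"
    def __init__ (self, data, size=512):
        self.data, self.size = data, size
    def more (self):
        result, self.data = self.data[:self.size], self.data[self.size:]
        return result

class upcase_producer:
    "wrap another producer, transforming its output"
    def __init__ (self, producer):
        self.producer = producer
    def more (self):
        import string
        return string.upper (self.producer.more())
</pre>
Because upcase_producer needs only more(), it can wrap
string_producer, a file producer, or another wrapper alike.</p>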
<p>The HTTP/1.1 producers make an excellent example. HTTP allows
replies to be encoded in various ways - for example a reply consisting
of dynamically-generated output might use the 'chunked' transfer
encoding to send data that is compressed on-the-fly.</p>
<img src="composing_producers.gif">
<p>In the diagram, green producers actually generate output, and grey
ones transform it in some manner. This producer might generate output
looking like this:
<pre>
HTTP/1.1 200 OK
Content-Encoding: gzip
Transfer-Encoding: chunked
Header ==> Date: Mon, 04 Aug 1997 21:31:44 GMT
Content-Type: text/html
Server: Medusa/3.0
Chunking ==> 0x200
Compression ==> <512 bytes of compressed html>
0x200
<512 bytes of compressed html>
...
0
</pre>
<p>Still more can be done with this output stream: For the purpose of
efficiency, it makes sense to send output in large, fixed-size chunks:
This transformation can be applied by wrapping a 'globbing' producer
around the whole thing.</p>
<p>An important feature of Medusa's producers is that they are
actually rather small objects that do not expand into actual output
data until the moment they are needed: The <code>async_chat</code>
class will only call on a producer for output when the outgoing socket
has indicated that it is ready for data. Thus Medusa is extremely
efficient when faced with network delays, 'hiccups', and low bandwidth
clients.
<p>One final note: The mechanisms described above are completely
general - although the examples given demonstrate application to the
<code>http</code> protocol, Medusa's asynchronous core has been
applied to many different protocols, including <code>smtp</code>,
<code>pop3</code>, <code>ftp</code>, and even <code>dns</code>.
# we can build 'promises' to produce external data. Each producer
# contains a 'promise' to fetch external data (or an error
# message). writable() for that channel will only return true if the
# top-most producer is ready. This state can be flagged by the dns
# client making a callback.
# So, say 5 proxy requests come in, we can send out DNS queries for
# them immediately. If the replies to these come back before the
# promises get to the front of the queue, so much the better: no
# resolve delay. 8^)
#
# ok, there's still another complication:
# how to maintain replies in order?
# say three requests come in, (to different hosts? can this happen?)
# yet the connections happen third, second, and first. We can't buffer
# the entire request! We need to be able to specify how much to buffer.
#
# ===========================================================================
#
# the current setup is a 'pull' model: whenever the channel fires FD_WRITE,
# we 'pull' data from the producer fifo. what we need is a 'push' option/mode,
# where
# 1) we only check for FD_WRITE when data is in the buffer
# 2) whoever is 'pushing' is responsible for calling 'refill_buffer()'
#
# what is necessary to support this 'mode'?
# 1) writable() only fires when data is in the buffer
# 2) refill_buffer() is only called by the 'pusher'.
#
# how would such a mode affect things? with this mode could we support
# a true http/1.1 proxy? [i.e., support <n> pipelined proxy requests, possibly
# to different hosts, possibly even mixed in with non-proxy requests?] For
# example, it would be nice if we could have the proxy automatically apply the
# 1.1 chunking for 1.0 close-on-eof replies when feeding it to the client. This
# would let us keep our persistent connection.
# -*- Mode: Python; tab-width: 4 -*-
# This is an alternative event loop that supports 'schedulable events'.
# You can specify an event callback to take place after <n> seconds.
# Important usage note: The granularity of the time-check is limited
# by the <timeout> argument to 'go()'; if there is little or no
# activity and you specify a 30-second timeout interval, then the
# schedule of events may only be checked at those 30-second intervals.
# In other words, if you need 1-second resolution, you will have to
# poll at 1-second intervals. This facility is more useful for longer
# timeouts ("if the channel doesn't close in 5 minutes, then forcibly
# close it" would be a typical usage).
import asyncore
import bisect
import time
socket_map = asyncore.socket_map
class event_loop:
def __init__ (self):
self.events = []
self.num_channels = 0
self.max_channels = 0
def go (self, timeout=30.0, granularity=15):
global socket_map
last_event_check = 0
while socket_map:
now = int(time.time())
if (now - last_event_check) >= granularity:
last_event_check = now
fired = []
# yuck. i want my lisp.
i = j = 0
while i < len(self.events):
when, what = self.events[i]
if now >= when:
fired.append (what)
j = i + 1
else:
break
i = i + 1
if fired:
self.events = self.events[j:]
for what in fired:
what (self, now)
# sample the number of channels
n = len(asyncore.socket_map)
self.num_channels = n
if n > self.max_channels:
self.max_channels = n
asyncore.poll (timeout)
def schedule (self, delta, callback):
now = int (time.time())
bisect.insort (self.events, (now + delta, callback))
def __len__ (self):
return len(self.events)
class test (asyncore.dispatcher):
def __init__ (self):
asyncore.dispatcher.__init__ (self)
def handle_connect (self):
print 'Connected!'
def writable (self):
return not self.connected
def connect_timeout_callback (self, event_loop, when):
if not self.connected:
print 'Timeout on connect'
self.close()
def periodic_thing_callback (self, event_loop, when):
print 'A Periodic Event has Occurred!'
# re-schedule it.
event_loop.schedule (15, self.periodic_thing_callback)
if __name__ == '__main__':
import socket
el = event_loop()
t = test ()
t.create_socket (socket.AF_INET, socket.SOCK_STREAM)
el.schedule (10, t.connect_timeout_callback)
el.schedule (15, t.periodic_thing_callback)
t.connect (('squirl', 80))
el.go(1.0)
# -*- Mode: Python; tab-width: 4 -*-
# fifo, implemented with lisp-style pairs.
# [quick translation of scheme48/big/queue.scm]
class fifo:
def __init__ (self):
self.head, self.tail = None, None
self.length = 0
self.node_cache = None
def __len__ (self):
return self.length
def push (self, v):
self.node_cache = None
self.length = self.length + 1
p = [v, None]
if self.head is None:
self.head = p
else:
self.tail[1] = p
self.tail = p
def pop (self):
self.node_cache = None
pair = self.head
if pair is None:
raise ValueError, "pop() from an empty queue"
else:
self.length = self.length - 1
[value, next] = pair
self.head = next
if next is None:
self.tail = None
return value
def first (self):
if self.head is None:
raise ValueError, "first() of an empty queue"
else:
return self.head[0]
def push_front (self, thing):
self.node_cache = None
self.length = self.length + 1
old_head = self.head
new_head = [thing, old_head]
self.head = new_head
if old_head is None:
self.tail = new_head
def _nth (self, n):
i = n
h = self.head
while i:
h = h[1]
i = i - 1
self.node_cache = n, h[1]
return h[0]
def __getitem__ (self, index):
if (index < 0) or (index >= self.length):
raise IndexError, "index out of range"
else:
if self.node_cache:
j, h = self.node_cache
if j == index - 1:
result = h[0]
self.node_cache = index, h[1]
return result
else:
return self._nth (index)
else:
return self._nth (index)
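# Example of the pair representation above (illustrative):
#     f = fifo()
#     f.push ('a'); f.push ('b'); f.push ('c')
#     # each node is [value, next]; head is now
#     # ['a', ['b', ['c', None]]] and tail is the last node.
#     f.pop()     # => 'a'
#     f[1]        # => 'c' (indexing walks the pairs, caching one node)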
class protected_fifo:
def __init__ (self, lock=None):
if lock is None:
import thread
self.lock = thread.allocate_lock()
else:
self.lock = lock
        self.fifo = fifo()
def push (self, item):
try:
self.lock.acquire()
self.fifo.push (item)
finally:
self.lock.release()
enqueue = push
def pop (self):
try:
self.lock.acquire()
return self.fifo.pop()
finally:
self.lock.release()
dequeue = pop
def __len__ (self):
try:
self.lock.acquire()
            return len(self.fifo)
finally:
self.lock.release()
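# A sketch of the intended use of protected_fifo (assumes the standard
# 'thread' module; 'handle' is a hypothetical consumer function):
#     q = protected_fifo()
#     def worker ():
#         while 1:
#             handle (q.dequeue())    # note: pop() raises when empty
#     thread.start_new_thread (worker, ())
#     q.enqueue ('job-1')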
class output_fifo:
EMBEDDED = 'embedded'
EOF = 'eof'
TRIGGER = 'trigger'
def __init__ (self):
# containment, not inheritance
self.fifo = fifo()
self._embedded = None
def push_embedded (self, fifo):
# push embedded fifo
fifo.parent = self # CYCLE
self.fifo.push ((self.EMBEDDED, fifo))
def push_eof (self):
# push end-of-fifo
self.fifo.push ((self.EOF, None))
def push_trigger (self, thunk):
self.fifo.push ((self.TRIGGER, thunk))
def push (self, item):
# item should be a producer or string
self.fifo.push (item)
# 'length' is an inaccurate term. we should
# probably use an 'empty' method instead.
def __len__ (self):
if self._embedded is None:
return len(self.fifo)
else:
return len(self._embedded)
def empty (self):
return len(self) == 0
def first (self):
if self._embedded is None:
return self.fifo.first()
else:
return self._embedded.first()
def pop (self):
if self._embedded is not None:
return self._embedded.pop()
else:
result = self.fifo.pop()
# unset self._embedded
self._embedded = None
# check for special items in the front
if len(self.fifo):
front = self.fifo.first()
if type(front) is type(()):
# special
kind, value = front
if kind is self.EMBEDDED:
self._embedded = value
elif kind is self.EOF:
# break the cycle
parent = self.parent
self.parent = None
# pop from parent
parent._embedded = None
elif kind is self.TRIGGER:
# call the trigger thunk
value()
# remove the special
self.fifo.pop()
# return the originally popped result
return result
def test_embedded():
of = output_fifo()
f2 = output_fifo()
f3 = output_fifo()
of.push ('one')
of.push_embedded (f2)
f2.push ('two')
f3.push ('three')
f3.push ('four')
f2.push_embedded (f3)
f3.push_eof()
f2.push ('five')
f2.push_eof()
of.push ('six')
of.push ('seven')
    while len(of):
        print of.pop()
# -*- Mode: Python; tab-width: 4 -*-
import string
import regex
import StringIO
import cgi_module_publisher        # bobo's publisher, used below
RCS_ID = '$Id: http_bobo.py,v 1.2 2001/04/25 19:07:31 andreas Exp $'
VERSION_STRING = string.split(RCS_ID)[2]
class bobo_extension:
hits = 0
SERVER_IDENT = 'Bobo Extension (V%s)' % VERSION_STRING
def __init__ (self, regexp):
self.regexp = regex.compile (regexp)
def __repr__ (self):
return '<Bobo Extension <b>(%d hits)</b> at %x>' % (
self.hits,
id (self)
)
def match (self, path_part):
if self.regexp.match (path_part) == len(path_part):
return 1
else:
return 0
    def status (self):
        # 'mstatus' is assumed to be medusa's status-producer module
        return mstatus.lines_producer ([
                '<h2>%s</h2>' % self.SERVER_IDENT,
                '<br><b>Total Hits:</b> %d' % self.hits,
                ])
    def handle_request (self, channel):
        self.hits = self.hits + 1
        [path, params, query, fragment] = channel.uri
        if query:
            # cgi_module_publisher doesn't want the leading '?'
            query = query[1:]
        # [the next three bindings were missing from the original;
        #  they are reconstructed here and should be treated as a sketch]
        path_parts = string.split (path, '/')
        module_name = path_parts[1]
        method = 'GET'              # assumption: the channel does not expose the method
        env = {}
        env['REQUEST_METHOD'] = method
        env['SERVER_PORT'] = channel.server.port
        env['SERVER_NAME'] = channel.server.server_name
        env['SCRIPT_NAME'] = module_name
        env['QUERY_STRING'] = query
        env['PATH_INFO'] = string.join (path_parts[1:],'/')
        # this should really be done with a real producer. just
        # have to make sure it can handle all of the file object api.
        sin = StringIO.StringIO('')
        sout = StringIO.StringIO()
        serr = StringIO.StringIO()
        cgi_module_publisher.publish_module (
                module_name,
                stdin=sin,
                stdout=sout,
                stderr=serr,
                environ=env,
                debug=1
                )
        channel.push (
                channel.response (200) + \
                channel.generated_content_header (path)
                )
        channel.push (sout.getvalue())
        channel.push (serr.getvalue())
        channel.close_when_done()
# -*- Mode: Python; tab-width: 4 -*-
import re
import string
import time
def concat (*args):
return ''.join (args)
def join (seq, field=' '):
return field.join (seq)
def group (s):
return '(' + s + ')'
short_days = ['sun','mon','tue','wed','thu','fri','sat']
long_days = ['sunday','monday','tuesday','wednesday','thursday','friday','saturday']
short_day_reg = group (join (short_days, '|'))
long_day_reg = group (join (long_days, '|'))
daymap = {}
for i in range(7):
daymap[short_days[i]] = i
daymap[long_days[i]] = i
hms_reg = join (3 * [group('[0-9][0-9]')], ':')
months = ['jan','feb','mar','apr','may','jun','jul','aug','sep','oct','nov','dec']
monmap = {}
for i in range(12):
monmap[months[i]] = i+1
months_reg = group (join (months, '|'))
# From draft-ietf-http-v11-spec-07.txt/3.3.1
# Sun, 06 Nov 1994 08:49:37 GMT ; RFC 822, updated by RFC 1123
# Sunday, 06-Nov-94 08:49:37 GMT ; RFC 850, obsoleted by RFC 1036
# Sun Nov 6 08:49:37 1994 ; ANSI C's asctime() format
# rfc822 format
rfc822_date = join (
[concat (short_day_reg,','), # day
group('[0-9][0-9]?'), # date
months_reg, # month
group('[0-9]+'), # year
hms_reg, # hour minute second
'gmt'
],
' '
)
rfc822_reg = re.compile (rfc822_date)
def unpack_rfc822 (m):
g = m.group
a = string.atoi
return (
a(g(4)), # year
monmap[g(3)], # month
a(g(2)), # day
a(g(5)), # hour
a(g(6)), # minute
a(g(7)), # second
0,
0,
0
)
# rfc850 format
rfc850_date = join (
[concat (long_day_reg,','),
join (
[group ('[0-9][0-9]?'),
months_reg,
group ('[0-9]+')
],
'-'
),
hms_reg,
'gmt'
],
' '
)
rfc850_reg = re.compile (rfc850_date)
# they actually unpack the same way
def unpack_rfc850 (m):
g = m.group
a = string.atoi
return (
a(g(4)), # year
monmap[g(3)], # month
a(g(2)), # day
a(g(5)), # hour
a(g(6)), # minute
a(g(7)), # second
0,
0,
0
)
# parsedate.parsedate - ~700/sec.
# parse_http_date - ~1333/sec.
def build_http_date (when):
return time.strftime ('%a, %d %b %Y %H:%M:%S GMT', time.gmtime(when))
def parse_http_date (d):
d = string.lower (d)
tz = time.timezone
m = rfc850_reg.match (d)
if m and m.end() == len(d):
retval = int (time.mktime (unpack_rfc850(m)) - tz)
else:
m = rfc822_reg.match (d)
if m and m.end() == len(d):
retval = int (time.mktime (unpack_rfc822(m)) - tz)
else:
return 0
# Thanks to Craig Silverstein <csilvers@google.com> for pointing
# out the DST discrepancy
if time.daylight and time.localtime(retval)[-1] == 1: # DST correction
retval = retval + (tz - time.altzone)
return retval
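# Example round-trip (build_http_date emits the RFC 822 format; the
# parser accepts either format, case-insensitively, and returns 0 on
# anything it cannot parse):
#     now = int (time.time())
#     d = build_http_date (now)       # e.g. 'Mon, 04 Aug 1997 21:31:44 GMT'
#     parse_http_date (d) == now      # => 1
#     parse_http_date ('not a date')  # => 0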
# -*- Mode: Python; tab-width: 4 -*-
import asynchat
import socket
import string
import time # these three are for the rotating logger
import os # |
import stat # v
#
# three types of log:
# 1) file
# with optional flushing. Also, one that rotates the log.
# 2) socket
# dump output directly to a socket connection. [how do we
# keep it open?]
# 3) syslog
# log to syslog via tcp. this is a per-line protocol.
#
#
# The 'standard' interface to a logging object is simply
# log_object.log (message)
#
# a file-like object that captures output, and
# makes sure to flush it always... this could
# be connected to:
# o stdio file
# o low-level file
# o socket channel
# o syslog output...
class file_logger:
# pass this either a path or a file object.
def __init__ (self, file, flush=1, mode='a'):
if type(file) == type(''):
if (file == '-'):
import sys
self.file = sys.stdout
else:
self.file = open (file, mode)
else:
self.file = file
self.do_flush = flush
def __repr__ (self):
return '<file logger: %s>' % self.file
def write (self, data):
self.file.write (data)
self.maybe_flush()
    def writeline (self, line):
        # file objects have no 'writeline' method; write the single line
        self.file.write (line)
        self.maybe_flush()
def writelines (self, lines):
self.file.writelines (lines)
self.maybe_flush()
def maybe_flush (self):
if self.do_flush:
self.file.flush()
def flush (self):
self.file.flush()
def softspace (self, *args):
pass
def log (self, message):
if message[-1] not in ('\r', '\n'):
self.write (message + '\n')
else:
self.write (message)
# like a file_logger, but it must be attached to a filename.
# When the log gets too full, or a certain time has passed,
# it backs up the log and starts a new one. Note that backing
# up the log is done via "mv" because anything else (cp, gzip)
# would take time, during which medusa would do nothing else.
class rotating_file_logger (file_logger):
    # If freq is non-None we back up "daily", "weekly", or "monthly".
    # Else if maxsize is non-None we back up whenever the log gets
    # too big.  If both are None we never back up.
def __init__ (self, file, freq=None, maxsize=None, flush=1, mode='a'):
self.filename = file
self.mode = mode
self.file = open (file, mode)
self.freq = freq
self.maxsize = maxsize
self.rotate_when = self.next_backup(self.freq)
self.do_flush = flush
def __repr__ (self):
return '<rotating-file logger: %s>' % self.file
# We back up at midnight every 1) day, 2) monday, or 3) 1st of month
    def next_backup (self, freq):
        (yr, mo, day, hr, min, sec, wd, jday, dst) = time.localtime(time.time())
        if freq == 'daily':
            return time.mktime((yr,mo,day+1, 0,0,0, 0,0,-1))
        elif freq == 'weekly':
            return time.mktime((yr,mo,day-wd+7, 0,0,0, 0,0,-1)) # wd(monday)==0
        elif freq == 'monthly':
            return time.mktime((yr,mo+1,1, 0,0,0, 0,0,-1))
        else:
            return None                  # not a date-based backup
def maybe_flush (self): # rotate first if necessary
self.maybe_rotate()
if self.do_flush: # from file_logger()
self.file.flush()
def maybe_rotate (self):
if self.freq and time.time() > self.rotate_when:
self.rotate()
self.rotate_when = self.next_backup(self.freq)
elif self.maxsize: # rotate when we get too big
try:
if os.stat(self.filename)[stat.ST_SIZE] > self.maxsize:
self.rotate()
except os.error: # file not found, probably
self.rotate() # will create a new file
    def rotate (self):
        (yr, mo, day, hr, min, sec, wd, jday, dst) = time.localtime(time.time())
        try:
            self.file.close()
            newname = '%s.ends%04d%02d%02d' % (self.filename, yr, mo, day)
            try:
                open (newname, "r").close()   # backup name already taken;
                newname = newname + "-%02d%02d%02d" % (hr, min, sec)
            except IOError:                   # the date-stamped name is unique
                pass
            os.rename (self.filename, newname)
            self.file = open (self.filename, self.mode)
        except:
            # rotation failed; carry on logging to the current file
            pass
# syslog is a line-oriented log protocol - this class would be
# appropriate for FTP or HTTP logs, but not for dumping stderr to.
# TODO: a simple safety wrapper that will ensure that the line sent
# to syslog is reasonable.
# TODO: async version of syslog_client: now, log entries use blocking
# send()
import m_syslog
class syslog_logger (m_syslog.syslog_client):
def __init__ (self, address, facility='user'):
m_syslog.syslog_client.__init__ (self, address)
self.facility = m_syslog.facility_names[facility]
self.address=address
def __repr__ (self):
return '<syslog logger address=%s>' % (repr(self.address))
def log (self, message):
m_syslog.syslog_client.log (
self,
message,
facility=self.facility,
priority=m_syslog.LOG_INFO
)
# log to a stream socket, asynchronously
class socket_logger (asynchat.async_chat):
    def __init__ (self, address):
        asynchat.async_chat.__init__ (self)   # set up the async_chat buffers
        if type(address) == type(''):
            self.create_socket (socket.AF_UNIX, socket.SOCK_STREAM)
        else:
            self.create_socket (socket.AF_INET, socket.SOCK_STREAM)
        self.connect (address)
        self.address = address
    def __repr__ (self):
        return '<socket logger: address=%s>' % (self.address)
    def log (self, message):
        if message[-2:] != '\r\n':
            self.push (message + '\r\n')
        else:
            self.push (message)
# log to multiple places
class multi_logger:
def __init__ (self, loggers):
self.loggers = loggers
def __repr__ (self):
return '<multi logger: %s>' % (repr(self.loggers))
def log (self, message):
for logger in self.loggers:
logger.log (message)
class resolving_logger:
"""Feed (ip, message) combinations into this logger to get a
resolved hostname in front of the message. The message will not
be logged until the PTR request finishes (or fails)."""
def __init__ (self, resolver, logger):
self.resolver = resolver
self.logger = logger
class logger_thunk:
def __init__ (self, message, logger):
self.message = message
self.logger = logger
def __call__ (self, host, ttl, answer):
if not answer:
answer = host
self.logger.log ('%s%s' % (answer, self.message))
def log (self, ip, message):
self.resolver.resolve_ptr (
ip,
self.logger_thunk (
message,
self.logger
)
)
class unresolving_logger:
"Just in case you don't want to resolve"
def __init__ (self, logger):
self.logger = logger
def log (self, ip, message):
self.logger.log ('%s%s' % (ip, message))
def strip_eol (line):
while line and line[-1] in '\r\n':
line = line[:-1]
return line
class tail_logger:
"Keep track of the last <size> log messages"
def __init__ (self, logger, size=500):
self.size = size
self.logger = logger
self.messages = []
def log (self, message):
self.messages.append (strip_eol (message))
if len (self.messages) > self.size:
del self.messages[0]
self.logger.log (message)
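# A sketch of composing the loggers above: write to stdout, keep the
# last 100 messages around (e.g. for a status page), and prepend the
# client ip without dns resolution:
#     lg = unresolving_logger (tail_logger (file_logger ('-'), 100))
#     lg.log ('127.0.0.1', ' "GET / HTTP/1.0" 200 1024\n')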
# -*- Mode: Python; tab-width: 4 -*-
# ======================================================================
# Copyright 1997 by Sam Rushing
#
# All Rights Reserved
#
# Permission to use, copy, modify, and distribute this software and
# its documentation for any purpose and without fee is hereby
# granted, provided that the above copyright notice appear in all
# copies and that both that copyright notice and this permission
# notice appear in supporting documentation, and that the name of Sam
# Rushing not be used in advertising or publicity pertaining to
# distribution of the software without specific, written prior
# permission.
#
# SAM RUSHING DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE,
# INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN
# NO EVENT SHALL SAM RUSHING BE LIABLE FOR ANY SPECIAL, INDIRECT OR
# CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS
# OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
# CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
# ======================================================================
"""socket interface to unix syslog.
On Unix, there are usually two ways of getting to syslog: via a
local unix-domain socket, or via the TCP service.
Usually "/dev/log" is the unix domain socket. This may be different
for other systems.
>>> my_client = syslog_client ('/dev/log')
Otherwise, just use the UDP version, port 514.
>>> my_client = syslog_client (('my_log_host', 514))
On win32, you will have to use the UDP version. Note that
you can use this to log to other hosts (and indeed, multiple
hosts).
This module is not a drop-in replacement for the python
<syslog> extension module - the interface is different.
Usage:
>>> c = syslog_client()
>>> c = syslog_client ('/strange/non_standard_log_location')
>>> c = syslog_client (('other_host.com', 514))
>>> c.log ('testing', facility='local0', priority='debug')
"""
# TODO: support named-pipe syslog.
# [see ftp://sunsite.unc.edu/pub/Linux/system/Daemons/syslog-fifo.tar.z]
# from <linux/sys/syslog.h>:
# ===========================================================================
# priorities/facilities are encoded into a single 32-bit quantity, where the
# bottom 3 bits are the priority (0-7) and the top 28 bits are the facility
# (0-big number). Both the priorities and the facilities map roughly
# one-to-one to strings in the syslogd(8) source code. This mapping is
# included in this file.
#
# priorities (these are ordered)
LOG_EMERG = 0 # system is unusable
LOG_ALERT = 1 # action must be taken immediately
LOG_CRIT = 2 # critical conditions
LOG_ERR = 3 # error conditions
LOG_WARNING = 4 # warning conditions
LOG_NOTICE = 5 # normal but significant condition
LOG_INFO = 6 # informational
LOG_DEBUG = 7 # debug-level messages
# facility codes
LOG_KERN = 0 # kernel messages
LOG_USER = 1 # random user-level messages
LOG_MAIL = 2 # mail system
LOG_DAEMON = 3 # system daemons
LOG_AUTH = 4 # security/authorization messages
LOG_SYSLOG = 5 # messages generated internally by syslogd
LOG_LPR = 6 # line printer subsystem
LOG_NEWS = 7 # network news subsystem
LOG_UUCP = 8 # UUCP subsystem
LOG_CRON = 9 # clock daemon
LOG_AUTHPRIV = 10 # security/authorization messages (private)
# other codes through 15 reserved for system use
LOG_LOCAL0 = 16 # reserved for local use
LOG_LOCAL1 = 17 # reserved for local use
LOG_LOCAL2 = 18 # reserved for local use
LOG_LOCAL3 = 19 # reserved for local use
LOG_LOCAL4 = 20 # reserved for local use
LOG_LOCAL5 = 21 # reserved for local use
LOG_LOCAL6 = 22 # reserved for local use
LOG_LOCAL7 = 23 # reserved for local use
priority_names = {
"alert": LOG_ALERT,
"crit": LOG_CRIT,
"debug": LOG_DEBUG,
"emerg": LOG_EMERG,
"err": LOG_ERR,
"error": LOG_ERR, # DEPRECATED
"info": LOG_INFO,
"notice": LOG_NOTICE,
"panic": LOG_EMERG, # DEPRECATED
"warn": LOG_WARNING, # DEPRECATED
"warning": LOG_WARNING,
}
facility_names = {
"auth": LOG_AUTH,
"authpriv": LOG_AUTHPRIV,
"cron": LOG_CRON,
"daemon": LOG_DAEMON,
"kern": LOG_KERN,
"lpr": LOG_LPR,
"mail": LOG_MAIL,
"news": LOG_NEWS,
"security": LOG_AUTH, # DEPRECATED
"syslog": LOG_SYSLOG,
"user": LOG_USER,
"uucp": LOG_UUCP,
"local0": LOG_LOCAL0,
"local1": LOG_LOCAL1,
"local2": LOG_LOCAL2,
"local3": LOG_LOCAL3,
"local4": LOG_LOCAL4,
"local5": LOG_LOCAL5,
"local6": LOG_LOCAL6,
"local7": LOG_LOCAL7,
}
import socket
class syslog_client:
def __init__ (self, address='/dev/log'):
self.address = address
if type (address) == type(''):
self.socket = socket.socket (socket.AF_UNIX, socket.SOCK_STREAM)
self.socket.connect (address)
self.unix = 1
else:
self.socket = socket.socket (socket.AF_INET, socket.SOCK_DGRAM)
self.unix = 0
# curious: when talking to the unix-domain '/dev/log' socket, a
# zero-terminator seems to be required. this string is placed
# into a class variable so that it can be overridden if
# necessary.
log_format_string = '<%d>%s\000'
def log (self, message, facility=LOG_USER, priority=LOG_INFO):
message = self.log_format_string % (
self.encode_priority (facility, priority),
message
)
if self.unix:
self.socket.send (message)
else:
self.socket.sendto (message, self.address)
def encode_priority (self, facility, priority):
if type(facility) == type(''):
facility = facility_names[facility]
if type(priority) == type(''):
priority = priority_names[priority]
return (facility<<3) | priority
def close (self):
if self.unix:
self.socket.close()
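# Worked example of the encoding above (illustrative): 'local0' is
# facility 16 and 'debug' is priority 7, so encode_priority yields
# (16 << 3) | 7 == 135, and the message goes out as '<135>...\000'.
#     c = syslog_client (('localhost', 514))
#     c.log ('testing', facility='local0', priority='debug')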
# -*- Mode: Python -*-
# the medusa icon as a python source file.
width = 97
height = 61
data = 'GIF89aa\000=\000\204\000\000\000\000\000\255\255\255\245\245\245ssskkkccc111)))\326\326\326!!!\316\316\316\300\300\300\204\204\000\224\224\224\214\214\214\200\200\200RRR\377\377\377JJJ\367\367\367BBB\347\347\347\000\204\000\020\020\020\265\265\265\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000!\371\004\001\000\000\021\000,\000\000\000\000a\000=\000\000\005\376`$\216di\236h\252\256l\353\276p,\317tm\337x\256\357|m\001@\240E\305\000\364\2164\206R)$\005\201\214\007r\012{X\255\312a\004\260\\>\026\3240\353)\224n\001W+X\334\373\231~\344.\303b\216\024\027x<\273\307\255G,rJiWN\014{S}k"?ti\013EdPQ\207G@_%\000\026yy\\\201\202\227\224<\221Fs$pOjWz\241<r@vO\236\231\233k\247M\2544\203F\177\235\236L#\247\256Z\270,\266BxJ[\276\256A]iE\304\305\262\273E\313\201\275i#\\\303\321\'h\203V\\\177\326\276\216\220P~\335\230_\264\013\342\275\344KF\233\360Q\212\352\246\000\367\274s\361\236\334\347T\341;\341\246\2202\177\3142\211`\242o\325@S\202\264\031\252\207\260\323\256\205\311\036\236\270\002\'\013\302\177\274H\010\324X\002\0176\212\037\376\321\360\032\226\207\244\2674(+^\202\346r\205J\0211\375\241Y#\256f\0127\315>\272\002\325\307g\012(\007\205\312#j\317(\012A\200\224.\241\003\346GS\247\033\245\344\264\366\015L\'PXQl]\266\263\243\232\260?\245\316\371\362\225\035\332\243J\273\332Q\263\357-D\241T\327\270\265\013W&\330\010u\371b\322IW0\214\261]\003\033Va\365Z#\207\213a\030k\2647\262\014p\354\024[n\321N\363\346\317\003\037P\000\235C\302\000\3228(\244\363YaA\005\022\255_\237@\260\000A\212\326\256qbp\321\332\266\011\334=T\023\010"!B\005\003A\010\224\020\220 H\002\337#\020 O\276E\357h\221\327\003\\\000b@v\004\351A.h\365\354\342B\002\011\257\025\\ \220\340\301\353\006\000\024\214\200pA\300\353\012\364\241k/\340\033C\202\003\000\310fZ\011\003V\240R\005\007\354\376\026A\000\000\360\'\202\177\024\004\210\003\000\305\215\360\000\000\015\220\240\332\203\027@\'\202\004\025VpA\000%\210x\321\206\032J\341\316\010\262\211H"l\333\341\200\200>"]P\002\212\011\010`\002\0066FP\200\001\'\024p]\004\027(8B\221\306]\000\201w>\002iB\001\007\340\260"v7J1\343(\257\020\251\243\011\242i\263\017\215\337\035\220\200\221\365m4d\015\016D\251\341iN\354\346Ng\253\200I\240\031\35609\245\2057\311I\302\2007t\231"&`\314\310\244\011e\226(\236\010w\212\300\234\011\012HX(\214\253\311@\001\233^\222pg{% \340\035\224&H\000\246\201\362\215`@\001"L\340\004\030\234\022\250\'\015(V:\302\235\030\240q\337\205\224\212h@\177\006\000\250\210\004\007\310\207\337\005\257-P\346\257\367]p\353\203\271\256:\203\236\211F\340\247\010\3329g\244\010\307*=A\000\203\260y\012\304s#\014\007D\207,N\007\304\265\027\021C\233\207%B\366[m\353\006\006\034j\360\306+\357\274a\204\000\000;'
# -*- Mode: text; tab-width:4 -*-
#
# $Id: README,v 1.2 2001/04/25 19:09:54 andreas Exp $
#
Preliminary support for asynchronous sendfile() on linux and freebsd.
Not heavily tested yet.
-Sam
# make test appear as a package
# make thread appear as a package