Allegro CL version 10.1
Unrevised from 10.0 to 10.1.


This document contains the following sections:

1.0 Multiprocessing introduction
   1.1 Data types added to standard Common Lisp
   1.2 The maximum number of simultaneous processes
2.0 Variables and functions
   2.1 Threads and processes
   2.2 Native threads and foreign functions
   2.3 When all wait functions depend on external events
   2.4 Waiting for input from a stream
      2.4.1 mp:process-wait vs mp:wait-for-input-available
3.0 Process functions and variables
4.0 Process pools
   4.1 Steps to using process pools
   4.2 Process-pool example
   4.3 Process-pool functionality
5.0 Processes and their dynamic environments
   5.1 Lisp listeners and special variable bindings
6.0 Gates
   6.1 Semaphores
7.0 Queues
   7.1 Sized queues
8.0 Process locks
9.0 cl:sleep and minimum sleeping time
10.0 OS error code 0xc on Windows
11.0 A simple example of multiprocessing
Appendix A. Wide binding
   Appendix A.1. The bindstack index
Appendix B. Processes and the runtime analyzer

1.0 Multiprocessing introduction

Allegro CL has extensions to support multiprocessing within a single executing Lisp image. This can use a single processor (non-SMP) or, on some platforms, use multiple processors. See smp.htm for information on SMP processing. Note that non-SMP versions of Lisp are available on all supported platforms.

Note that the SMP version works on a machine that has only one processor, and using the SMP version may be more efficient on such machines even though (obviously) only one process can run at a time.

All processes share the same Lisp address space, sometimes called the Lisp world. Each process has its own execution stack (i.e. "call" stack) and dynamic variable bindings. All processes share everything else in the Lisp world, including packages, streams, global function bindings, and global values of special variables. (The global value of a special variable is its outermost value, outside any dynamic binding.) The Allegro CL compiler, interpreter, top level, and other subsystems all lambda bind their necessary special variables, and Allegro CL code itself is reentrant. Therefore, multiple processes can correctly maintain any number of correct and independent ongoing computations simultaneously within the single Lisp world.

There are two implementations of multiprocessing in Allegro CL, the native threads model and the virtual threads model. :os-threads appears on the *features* list of the implementation using the native threads model and does not appear on the *features* list of the implementation using the virtual threads model. In earlier releases, the programming interface of the two models had significant differences. These differences are almost all gone in release 7.0.

In the native threads model, each process within Lisp is implemented by an operating system thread and management tasks such as scheduling are done by the operating system. In the virtual threads model, all processes are implemented internally by Lisp.

Although there are many subtle differences between these two implementations, in practice it is very rare for application code to have any dependency on these differences. Most code developed on one will run without modification on the other. This has been verified during the porting of several preexisting multiprocess-intensive subsystems.

Most symbol names for multiprocessing are in the multiprocessing package, nicknamed mp. Programmers must preface symbols with the package prefix mp: or execute

(use-package :multiprocessing) 

before the first reference to any multiprocessing function. Depending how Lisp is installed the multiprocessing module may not be loaded into the initial Lisp system. To guarantee it is available, execute the form

(require :process) 

before calling any multiprocessing functions, or place this top-level form

(eval-when (:compile-toplevel :load-toplevel :execute) (require :process))

near the start of any source file which uses multiprocessing.

1.1 Data types added to standard Common Lisp

Multiprocessing adds these user-visible data types to standard Common Lisp:

  1. A process corresponds to the usual operating system notion of a thread. In the :os-threads implementation each process is associated with a single Operating System thread. In the non :os-threads implementation each process is associated with a virtual thread.
  2. A process-lock provides a mechanism for process synchronization. A process-lock is either free or seized by some process. Any process trying to seize a seized lock will block until the lock is free. See Section 8.0 Process locks.
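A minimal sketch of process-lock usage (assuming the multiprocessing module has been loaded; mp:make-process-lock and mp:with-process-lock are described in Section 8.0 Process locks):

```lisp
;; Create a lock to guard a shared counter.
(defvar *counter-lock* (mp:make-process-lock :name "counter-lock"))
(defvar *counter* 0)

(defun increment-counter ()
  ;; with-process-lock seizes the lock, runs the body, and releases
  ;; the lock on exit, including a non-local exit.
  (mp:with-process-lock (*counter-lock*)
    (incf *counter*)))
```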

1.2 The maximum number of simultaneous processes

The maximum number of simultaneous processes is the lesser of 2000 and whatever the Operating System will make available. That is really a limit on the number of simultaneous threads: running processes are associated with threads. The garbage collector frees dead threads, so many more than that number of threads may be used while an application runs, just no more than that number in use at any one time.

If a process tries to run when the limit of running processes/threads is reached, an error will be signaled.

2.0 Variables and functions

The descriptions below provide only a brief introduction. Please follow the links to the individual description pages for details.

Name Arguments Notes
symeval-in-process symbol thread

This function returns two values. The first is the value of the special symbol in the given thread (which may be the current thread). It only looks for actual bindings on the thread; if the symbol has a global value but is not bound on the thread, the global value is not returned.

The second returned value describes the status of the binding. t is returned if the symbol is bound on the thread, nil if the symbol has no binding, and :unbound if there is a binding which has been subjected to makunbound. In the latter two cases, the first returned value is nil.

sys:global-symbol-value symbol This function returns two values. The first is the global value for the special variable named by symbol, ignoring any bindings on the current thread and the second is t if the variable has a value (in the sense of boundp). Otherwise the first value will be nil and the second will be the symbol :unbound.
profile-process-p process This function returns the value of the profile-process-p flag for the thread specified by process. If the value of the flag is non-nil, then the space and time runtime analyzers will collect samples when this process is executing.

2.1 Threads and processes

The process object implements the abstraction of independent processes running within the same Lisp world. Process objects are CLOS instances, allowing the application to define subclasses for whatever purpose. Processes are implemented on top of threads; each active process is associated with a particular thread. Thread objects are simple static structures that contain the information used for scheduling. Active process threads are managed by the Operating System in native thread and SMP implementations and within Lisp in non-SMP virtual thread implementations. sys:*current-thread* is bound in each thread to the thread object representing this (native or virtual) thread.

A process object is implemented as a CLOS class. Some of its slots are meaningful to user code, but except for those explicitly noted they should be treated as read only.

Whether and when a process runs is controlled by several mechanisms. First the process' initial function and arguments must be specified, and then the process must be 'reset'. (The process-preset function combines these operations.)

Second, a process maintains two lists: its run-reasons and its arrest-reasons. These lists can contain Lisp objects of any type (whether or not each list is empty is the only relevant issue). For a process to be considered for execution, it must have at least one object on its run-reasons list and no objects on its arrest-reasons list.
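For example, an arrest reason can be used to pause and later resume a running process (a sketch using the functions described in Section 3.0 Process functions and variables):

```lisp
;; Start a worker; process-run-function supplies a run reason for us.
(defparameter *worker*
  (mp:process-run-function "worker" #'(lambda () (loop (sleep 1)))))

;; Adding an arrest reason makes the process inactive...
(mp:process-add-arrest-reason *worker* :paused)

;; ...and revoking it lets the process run again.
(mp:process-revoke-arrest-reason *worker* :paused)
```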

Finally, a process that needs to wait for some arbitrary condition does so using the process-wait function. This function specifies a function and arguments. When the OS considers a waiting process for running, it causes the wait-function to be applied to the wait-arguments (in the process environment). If a non-null value is returned the process runs and the call to process-wait returns. process-wait is most efficient when the wait function is one of gate-open-p, read-no-hang-p, write-no-hang-p, or stream-listen, but any appropriate function can be used.
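For instance, a process can block until another process opens a gate (a sketch; see Section 6.0 Gates for the gate functions, and note that make-gate's optional argument is assumed here to give the initial open/closed state):

```lisp
;; A gate created closed; gate-open-p is one of the wait functions
;; that process-wait handles most efficiently.
(defvar *gate* (mp:make-gate nil))

(mp:process-run-function "consumer"
  #'(lambda ()
      (mp:process-wait "waiting for gate" #'mp:gate-open-p *gate*)
      (do-the-work)))        ; do-the-work is a hypothetical function

;; Some other process later signals readiness:
(mp:open-gate *gate*)
```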

It is useful to define some terms for process states. A process is active if it has been preset, has not completed, has at least one run reason, and has no arrest reasons; otherwise, it is inactive. Active processes are further divided into two classes. An active process is waiting if it has executed a process-wait that has not yet completed. An active process is runnable if it is not waiting. In addition, the process actually running at any time is called the current process, or current processes in SMP.

On Windows, even though a process may be waiting, it still typically watches for operating system messages that require a response (because Windows expects threads to do this). However, there are some rare circumstances where this message handling should be suppressed. The macro mp:with-message-interrupts-disabled will do this.

Processes run until complete or interrupted. When a process is run, it is given an amount of processor time; when that time expires the operating system interrupts the process and looks about for other processes which can be run.

A process has two parameters to control scheduling. Its priority is an integer indicating its scheduling priority, higher numbers request more frequent processing. Its quantum indicates the minimum time (in the range 0.1 to 20.0 seconds inclusive) the process should be allowed to run before a clock tick might suspend it, assuming it does not give up execution voluntarily. This is to prevent thrashing between processes, particularly those that might have deep stacks.

The generic function process-join connects processes and suspends one while another completes.
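For instance (a sketch, assuming the launched process exits normally):

```lisp
;; Launch a computation in its own process...
(let ((p (mp:process-run-function "adder" #'+ 1 2 3)))
  ;; ...then suspend the current process until it completes.
  ;; Per its description, process-join returns the list of result
  ;; values of the preset function, here (6).
  (mp:process-join p))
```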

See smp.htm for more information on SMP processes.

2.2 Native threads and foreign functions

In the virtual thread (non :os-threads) model, foreign code is not interruptible and Lisp code cannot run until the foreign code returns control to Lisp, either by completing or by calling back to Lisp. In the native threads (:os-threads) model, Lisp code on one process may run while foreign code runs in another process. This means that certain properties of foreign code may no longer hold. Among them these are the most significant:

2.3 When all wait functions depend on external events

In an SMP Lisp, all live processes are checked from time to time to see if their wait functions indicate they should be run. But in a non-SMP Lisp, only changes within Lisp trigger checking wait functions. An external change (such as a file becoming available) will not in itself cause a Lisp where all processes are waiting to become active. If there is a possibility that all Lisp processes could be waiting for external events in a non-SMP Lisp, you can ensure regular running of wait functions by creating what is called a heartbeat process:

(mp:process-run-function "heartbeat" #'(lambda () (loop (sleep 1))))

This process wakes up every second and this wake up is seen as a Lisp event, which triggers running wait functions, thus ensuring that external events are noticed.

There is no harm in setting up a heartbeat process in an SMP Lisp, but doing so should not be necessary to avoid the apparent hung state that is possible, as said above, in a non-SMP Lisp. If you find that a heartbeat process seems necessary (that is, an SMP Lisp seems to get into a hung state without a heartbeat process but not with one), please report the circumstances, as it may indicate a larger problem.

2.4 Waiting for input from a stream

This section deals with the situation where no process can run until input arrives from outside of Lisp, typically via a stream. The issue is how to have Lisp not waste machine cycles waiting for the arrival of input, yet have Lisp wake up in a timely manner when the input becomes available.

Each waiting process has an associated wait function. Whenever the process' thread is given processor time, it executes the wait function and if the result is still false, it immediately releases the processor back to the OS. If the OS were to cycle repeatedly through all the waiting processes, the processor will be perpetually busy even though no useful work is being done. This might be reasonable on a dedicated machine -- the wasted processor cycles would not be valuable as there would be nothing else for the processor to do. But this would be bad computing citizenship on a host with other processes unrelated to Lisp, since it would consume processor cycles and paging bandwidth that might be in short supply. So Lisp tries to conserve machine resources by keeping waiting process threads completely quiescent when nothing is runnable, and allowing the OS to give threads processor time to check their wait functions only when something may have changed that could affect the result of one or more wait functions. However, Lisp needs a little help to do this.

There are only three things that can cause a wait function to return true after previously having returned nil: (1) some other running Lisp process has changed some piece of state in the Lisp world that the wait function tests; (2) an asynchronous signal arrives; or (3) input becomes available on a file descriptor (generally associated with a Lisp stream) that the wait-function tests.

Case (1) requires that wait functions be tested periodically whenever (or at least immediately after) any Lisp process actually runs. The operating system thread management does this automatically. But what should happen when absolutely no processes are runnable? We want Lisp to stop running completely until either (2) or (3) happens.

When a process finds that its wait function is not yet satisfied, it releases the CPU by performing the equivalent of a Unix select() on the set of "interesting" file descriptors. This causes the process to block. The OS will not give the associated thread any CPU time until a signal arrives (2) or input becomes available on one of the interesting file descriptors (3). The process can run its wait-function again to determine whether it has actually become unblocked.

Unfortunately, the process machinery has no way to correlate wait functions with OS file descriptors, such that input becoming available on the file descriptor would (or might) cause the wait-function to return true. The system needs to be told explicitly that a file descriptor is interesting to some wait-function. The low-level input handlers (e.g., for read-char) for Allegro CL streams do this automatically, but any user implementing custom streams and/or making a low-level foreign-function interface to (for example) the underlying operating-system socket interface will need to write input functions in such a way that they inform the process machinery about input file descriptors. The description of wait-for-input-available describes how this is done.

2.4.1 mp:process-wait vs mp:wait-for-input-available

The purpose of wait-for-input-available is to wait on one or more input streams. wait-for-input-available takes a wait function just like process-wait, but before suspending the calling process it carefully records the input file descriptors for each stream or file descriptor argument. While the calling process is blocked inside the call to wait-for-input-available, these file descriptors will be passed to select() so that available input will immediately return from select() and awaken this thread.

If process-wait is used instead of wait-for-input-available, the thread may fail to notice when input becomes available and not run the wait function until some other possibly-unrelated interrupt or input causes the entire Lisp image to wake up and eventually run through all wait functions again.
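A sketch of the preferred pattern (the socket stream and handler function are hypothetical; wait-for-input-available's arguments are listed in Section 3.0):

```lisp
;; Block until input is available on the stream, or 30 seconds pass.
;; Unlike plain process-wait, this registers the stream's file
;; descriptor so arriving input wakes the thread immediately.
(when (mp:wait-for-input-available (list socket-stream) :timeout 30)
  (handle-request (read-line socket-stream)))  ; hypothetical handler
```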

3.0 Process functions and variables

The descriptions below provide only a brief introduction. Please follow the links to the individual description pages for details.

Name Arguments Notes
*all-processes* [variable] The value of this variable is a list of all processes that have been created and have not yet completed or been killed.
*current-process* [variable] The value of this variable is the process currently running (:os-threads) or which the scheduler is currently running (non :os-threads). In non :os-threads implementations, nil is returned if the scheduler itself is running.
*default-process-quantum* [variable] Default quantum given to each process.
*disallow-scheduling* [variable]

This special variable is bound to t whenever multiprocessing scheduling is disabled. For example, the system binds this variable to t during the execution of the forms within a without-scheduling form.

This variable should be treated as read-only and should never be set or bound by user code.

make-process &key name reset-action run-reasons arrest-reasons priority quantum resume-hook suspend-hook initial-bindings message-interrupt-function stack-allocation run-immediately This function returns a new process object, but does nothing about making it runnable. Follow the link to the full description for details.
process-add-arrest-reason process object This function adds object to the list of arrest-reasons for process.
process-add-run-reason process object This function adds object to the list of run-reasons for process.
process-allow-schedule &optional other-process This function allows processes other than the executing (and hence calling) process to run. All other processes of equal or higher priority will have a chance to run before the calling process is next run. The optional argument is ignored, but if supplied it should be another process.
process-arrest-reasons process This function returns the list of arrest-reasons for process.
process-disable process This function makes process inactive by revoking all its run and arrest reasons. The effect is immediate if a process disables itself.
process-enable process Makes process active by removing all its run and arrest reasons, then giving it a single run reason of :enable.
process-initial-bindings process This slot of a process stores an alist of initial special bindings which are established in a process when it is started. The value may be set with setf.
process-initial-form process This function returns a cons of the initial function of process and its argument list.
process-interrupt process function &rest args This function forces process to apply function to args when it next executes. When function returns, the original computation of process continues. If process is waiting when interrupted, it runs the interrupt function and then continues waiting. If process is not active, process-interrupt makes it active for the interrupt function, then makes it inactive again.
process-join process &key (errorp t) error-values (search-list '(:current :previous :next)) Suspend the current process until mp:process-thread of process exits, and then return the list of result values of the preset function of process. See function description for behavior when process does not exit normally and other details.
process-kill process This function resets the process to unwind it, then removes it from consideration by the scheduler and from the *all-processes* list.
process-name process This function returns the name of process, which must be a string. This value may be changed with setf.
process-name-to-process name &key :abbrev :error This function returns the process whose process-name is name. name must be a string. If the abbrev keyword argument is specified non-nil, then name is matched to the beginning of each process-name to find a match.
process-preset process function &rest arguments This function sets the initial function and arguments of process, then resets any computation in progress in it. This does not make process active if it was not already active.
process-priority process This function returns the priority of process. It defaults to 0 and may be set to any fixnum with setf.
process-property-list process The property-list slot of a process implements a generalized property list as a convenient place to store additional information about a process.
process-quantum process This function returns the quantum for process. The quantum may be specified when the process is created; it defaults to the value of *default-process-quantum* and may be set to any real value between 0.1 and 20 with setf.
process-reset process &optional no-unwind kill This function causes process when it next runs to throw out of its present computation, if any, then apply its initial function to its initial argument.
process-resume-hook process It is normal for execution of a process to be interrupted many times. This is transparent to the process and usually it is not necessary for the process to know when its execution is suspended and resumed. However, if these slots are non-nil, they should be functions of no arguments which are called on the process' stack-group or thread each time the execution is suspended or resumed (but not when the process is first started or when it is killed).
process-revoke-arrest-reason process object This function removes object from the list of arrest reasons for process.
process-revoke-run-reason process object This function removes object from the list of run reasons for process.
process-run-function name-or-keywords function &rest args This function does a make-process, then presets the new process with function and args. The first argument is either a string, which is the name of the process, or is a list of keyword arguments accepted by make-process. The new process is returned. By default, the process is killed when and if it completes.
process-run-reasons process This function returns the list of run-reasons for process.
process-run-restartable-function name-or-keywords function &rest args This function is just like process-run-function (just above), but automatically provides a :reset-action argument of t. The process thus started will restart if it is reset or completes.
process-runnable-p process and process-active-p process These functions return t if, respectively, process is runnable or active. A process is active if it has been reset and not yet completed, and has at least one run reason and no arrest reasons. It is runnable if it is active and not waiting.
process-sleep seconds &optional whostate process-sleep suspends the current process for at least the number of seconds specified. That number may be any non-negative, non-complex number. While the process sleeps, other processes are allowed to run. The whostate (default "Sleep") is a string which temporarily replaces the process' whostate during the sleep.

When multiprocessing is initialized, Common Lisp function sleep is changed to be equivalent to process-sleep. Instead of causing the entire Lisp world to suspend execution for the indicated time, only the executing process is suspended. This is usually the desired action.

process-thread process Returns the thread associated with process.
process-wait whostate function &rest arguments This function suspends the current process (the value of *current-process*) until applying function to arguments yields true. The whostate argument must be a string which temporarily replaces the process' whostate for the duration of the wait. This function returns nil.

See the discussion under the headings Section 2.4 Waiting for input from a stream and Section 2.4.1 mp:process-wait vs mp:wait-for-input-available.

process-wait-args process This function returns the list of arguments passed to the wait-function of process.
process-wait-function process This function returns the function used to determine when a waiting process becomes runnable.
process-wait-with-timeout whostate seconds function &rest args This function is similar to process-wait, but with a timeout. The units of time are seconds. The value of seconds may be any real number. Negative values are treated the same as 0. The wait will timeout if function does not return true before the timeout period expires.
process-whostate process This function returns the current who-line string of process.
start-scheduler nil

:os-threads: initializes multiprocessing (the function is misnamed since there is no scheduler; the name is kept for consistency with the non :os-threads implementation).

Non :os-threads: start the scheduler process and initialize multiprocessing.

without-interrupts &body body This macro executes body protecting against any handling of asynchronous interrupts. Execution of body is guaranteed to complete without any other process running, or any asynchronous interrupt being dispatched, unless the process does something to block or otherwise explicitly yield to the scheduler (e.g., with process-allow-schedule). It is an error to call a heap-releasing foreign function within the scope of without-interrupts.
without-scheduling &body body This macro inhibits the OS (in :os-threads) or the scheduler (non :os-threads) from suspending a process involuntarily (asynchronously) during the execution of body. This always works in non :os-threads versions since the scheduler is a Lisp process. However, in :os-threads versions, the OS will run another process if the current process blocks, waits, or executes a process-allow-schedule.
wait-for-input-available streams &key :wait-function :whostate :timeout This lower-level function extends the capabilities of process-wait and process-wait-with-timeout to allow a process to wait for input from multiple streams and to wait for input from a file.
sys:with-timeout (seconds . timeout-forms) &body body This macro evaluates the body as a progn body. If the evaluation of body does not complete within the specified interval, execution throws out of the body and the timeout-forms are evaluated as a progn body, returning the result of the last form. The timeout-forms are not evaluated if the body completes within seconds.
mp:with-virtual-timeout (mode seconds timeout-forms) &body body This macro is similar to sys:with-timeout in that it evaluates the body as a progn body, throwing out of execution if the timeout is reached, but the time counted is just Lisp process CPU time (when mode is :process), not counting gc time, or just individual thread CPU time (when mode is :thread), again not counting gc time. mode can be :thread in SMP Lisps only. If the evaluation of the body does not complete within the specified interval, execution throws out of the body and the timeout-forms are evaluated as a progn body, returning the result of the last form. The timeout-forms are not evaluated if the body completes before the time has expired.
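A minimal sketch of sys:with-timeout (reply-stream is a hypothetical stream):

```lisp
;; Wait up to 5 seconds for a reply; if the read does not complete
;; in time, the body is abandoned and :timed-out is returned instead.
(sys:with-timeout (5 :timed-out)
  (read-line reply-stream))
```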

4.0 Process pools

The simple way to launch a process to perform some task is to create a function that performs the task and to call mp:process-run-function with that function as the second (function to run) argument. A process is then created, the function runs to completion, and the process is then terminated.

That way works fine for functions that either never complete or run for a very long time, but for functions that run for a short time and then finish, each call pays the price of creating and later deleting the associated process. Processes are heavyweight objects, so the creation and deletion overhead is significant (this is particularly true on the Windows operating system). The process pool facility is designed to reduce or even nearly eliminate this overhead. The process pool facility was added to version 10.1 in a patch released early in 2019.

A process pool maintains a collection of processes. Each is either active or idle. When a request is made to run a function, typically by a call to mp:process-pool-run (the process pool equivalent of mp:process-run-function), an idle process, if there is one, becomes active and runs the function. If no process in the pool is idle, then, depending on characteristics of the pool, the system either creates a new process or waits until a process becomes available and then runs the function. The number of processes in the pool is limited. If that limit has not been reached, new processes are created when needed. If it has, the request to run waits for an existing process to become available.

Process pools work and are useful in both SMP Lisps and non-SMP Lisps.

4.1 Steps to using process pools

To use the process pool facility, do the following:

  1. Load the functionality. Evaluate (require :process-pool). All symbols supporting process pool functionality are exported from the multiprocessing (nicknamed mp) package. Evaluating (use-package :mp) will free you from having to package qualify process pool-related symbols but is not necessary. Process pool examples will assume (use-package :mp) has been evaluated.
  2. Decide whether to use the default process pool or to create your own. No process-pool exists when the process-pool module is loaded. The initial default process pool is created (if desired) by calling ensure-default-process-pool. ensure-default-process-pool can also be used to modify the default process pool or to recreate it if it is deleted. Calling that function will either return the existing default pool or (if there is no default pool) create and return a new one. In functions like process-pool-run, there is a pool argument (typically the first required argument). Specifying nil means use the default process pool. Additional process pools can be created with make-process-pool. The process-pool class can be subclassed so methods can be customized for your particular needs.
  3. Determine what work you wish to do with processes. The processes will run a function called with specified arguments. You must provide these functions and decide on appropriate arguments. The functions you wish to run are called work-functions.
  4. Initiate processes with process-pool-run. When you have the work-functions to call and know the suitable arguments, call process-pool-run for each work-function/argument set. process-pool-run creates and returns a process-pool-work-item, which is a struct that contains information about what to run (including the work-function and the arguments) and what has been done (whether work has started, is going on, or has finished). Idle existing processes, if any, are assigned work items and run them. New processes are created as needed (up to the process-pool-active-limit) and assigned work items. When no more processes can be created, work items are queued. If process-pool-work-limit for the process pool is non-nil, then when the queue gets too large, process-pool-run stops queueing work-items but still creates them. In that case, process-pool-run returns nil as its first returned value. Your program must catch the fact that the work-item was neither run nor queued and resubmit it if necessary. The work-item passed to or created by process-pool-run is always returned by process-pool-run as its second returned value.
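The steps above might be sketched as follows. Note this is only a sketch: the :function and :arguments keywords shown for process-pool-run are assumptions; consult its description page for the exact lambda list.

```lisp
;; Step 1: load the facility.
(require :process-pool)

;; Step 2: create the default pool if it does not yet exist
;; (passing nil as the pool argument below then selects it).
(mp:ensure-default-process-pool)

;; Steps 3 and 4: submit a work-function with its arguments.
;; The keyword names here are assumed, not confirmed by this document.
(mp:process-pool-run nil :function #'expt :arguments '(2 10))
```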

Report functions

The work function running in a pool process does not need to be modified or coded specially to run in a pool process. The progress of a work item can be monitored by report functions that are supplied in addition to the work function.

There is a report-start function and a report-end function. The report-start function will be run when the work-item starts to run and the report-end function when the work function completes.

A process-pool has report-start and report-end slots whose values are function designators (or nil). These will be used if the work-item does not have its own functions specified in its report-start and report-end slots.

Report-start functions are passed one argument: the work-item being run. Report-end functions are passed three arguments: the work-item, a list of values returned by the work function, and the error if one was signaled by the work function or nil if no error was signaled. Errors in the report functions are ignored silently and not reported anywhere. Errors in the work function do not cause computation to break but do cause the function to stop executing and the report-end function to be called. The error slot of the work-item is modified with information about the error signaled.

If the work-item's report-start slot is not given a value when the work-item is created (that is, the report-start argument is unspecified in the call to make-process-pool-work-item or in the call to process-pool-run) or is given the value nil, then the process-pool's report-start function is used. If the process-pool's report-start slot value is not nil, there is no way to prevent a report-start function from running. If you want nothing done by the report-start function, define a function that does nothing and make it the value of the work-item's report-start slot. Similarly for the report-end function.
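For example, the following sketch (the function name is our own invention) defines a do-nothing report-start function; installing it on a work-item effectively suppresses the pool's report-start function, since the pool default is consulted only when the work-item's slot is nil.

```lisp
;; Sketch: a do-nothing report function.  The name is arbitrary.
;; Because the pool's report-start function is used only when the
;; work-item's report-start slot is NIL, installing this function
;; on a work-item suppresses the pool-level report.
(defun ignore-report-start (work-item)
  (declare (ignore work-item))
  nil)

;; Hypothetical usage: pass it when submitting the work item, e.g.
;;   (mp:process-pool-run nil :function #'my-work
;;                            :report-start #'ignore-report-start)
```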

Creating and modifying process pools

Calling ensure-default-process-pool creates and returns a process-pool object if no default process pool exists. That new process pool becomes the default process pool. If there is a default process pool, ensure-default-process-pool just returns it. ensure-default-process-pool can modify an existing default process pool by specifying values for arguments different from the current default process pool values.

Similarly, other process-pool objects can be created or modified with make-process-pool. That function can be applied to an existing process pool in order to change aspects of it, such as the maximum number of active processes or the default report-start or report-end functions.

Creating and modifying work-items

A process-pool-work-item is a structure that stores information about what is to be done when the work item is run. Such information includes the function to call and the arguments to that function, and report-start and report-end functions if you do not want to use the defaults in the process-pool. There is a slot for additional data that may be accessed when the work item is being run. The variable *process-pool-work-item* is bound to the work-item struct in the process running the work item, allowing easy access to the work-item struct while processing.

Work items are created as a side effect of calling process-pool-run. Arguments to that generic function can be specified to set various slots of the newly created process-pool-work-item. There is also a work keyword argument which can be used to specify an existing process-pool-work-item to be run.

The function make-process-pool-work-item can be used to create a new process-pool-work-item or to modify an existing one. Modifying a running process-pool-work-item does not cause an error to be signaled but is unsafe and should not be done. The function process-pool-work-item-active-p returns true when a work item is running or queued to run. If that function returns nil, it is safe to modify the argument work item (modulo the usual warnings about multiple processes doing things to the same process pools and work items).

What process-pool-run does to a work-item

Calling process-pool-run adds a newly-created or an existing process-pool-work-item (depending on the value of the work argument) to the process-pool specified by the pool argument. If that pool has idle processes, one will be assigned the work-item. If there are no idle processes but fewer than process-pool-active-limit processes are active, a new process will be created and it will run the work-item. If process-pool-active-limit processes are already running, the work-item will be added to the process pool's queue (you can specify with the queue-p argument that the work-item should either run immediately or not at all). If the queue is full, or if queue-p specified that the work-item must not be queued and it could not run immediately, the call to process-pool-run returns nil and the associated work-item will not be run absent further action by the program (that is, your program must notice that process-pool-run returned nil and pass the work-item to process-pool-run again if you want it to run). The work-item is always returned as the second returned value regardless of what process-pool-run did with it.
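A sketch of the resubmission pattern just described (my-work-style functions and arguments are hypothetical; the pool argument nil designates the default pool, and the work keyword documented earlier is used to resubmit the same work-item):

```lisp
;; Sketch: handle the case where PROCESS-POOL-RUN could neither
;; run nor queue the work-item (first value NIL).  The work-item
;; is always the second value, so it can be resubmitted later
;; via the WORK keyword argument.
(defun submit-with-retry (fn args)
  (multiple-value-bind (ran item)
      (mp:process-pool-run nil :function fn :arguments args)
    (unless ran
      ;; Queue was full: wait briefly, then resubmit the same item.
      (sleep 0.1)
      (mp:process-pool-run nil :work item))
    item))
```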

When the work-item is run, its report-start function is called with the work-item as its single argument, then the work-item's function is applied to the specified arguments, and when the function returns, the report-end function is called with three arguments: the work-item, the values returned by the function, and the error object if an error was signaled or nil if no error was signaled.

The function discard-process-pool-work-item can be called to undo the effects of process-pool-run. It will dequeue the work-item if it is queued but not running and return :dequeued. It will return :idle if the work-item is neither queued nor running (including when it has completed). The behavior when the work-item is running depends on the kill keyword argument. See the description of discard-process-pool-work-item for details.

Canceling work-items while they are running

The report-start and report-end functions can call cancel-process-pool-worker, as can the work function. The worker process performing the work item is then immediately terminated and discarded (that is, not returned to the idle list). In the report-start function or in the work function, this call would normally be made if it is determined that further work is unnecessary or harmful. In the report-end function, calling cancel-process-pool-worker differs from simply returning (which also ends the work of a work item) in that simply returning causes the associated process to return to the idle list, while calling cancel-process-pool-worker causes that process to be discarded. When cancel-process-pool-worker is called in the report-start function, the work function is not called, but the report-end function is.

The report-start function and the work function can also call cancel-process-pool-work. If called in the report-start function, that function exits and neither the work function nor the report-end function is called. If called in the work function, the work function exits and the report-end function is not called. cancel-process-pool-work called anywhere else (including in a report-end function) simply returns nil without doing anything else.

Process pool information functions

There are several generic functions (actually class readers) that provide information about the activity of process pools. Among these are the following. All take a process-pool argument. The value nil means the default process pool. Values are generally valid the instant they are returned but can immediately become outdated.

4.2 Process-pool example

This example performs long arithmetic operations on segments of a large array. Because the segments are disjoint, the operations can be run in parallel. Parallel operation is managed in part by barrier functionality, which is described in The barrier API in smp.htm. See in particular make-barrier, barrier-pass-through, and barrier-wait.

;; A simple process-pool usage example.

(in-package :user)
(eval-when (compile load eval)
  (require :process-pool)
  (require :smputil)
  (use-package :mp))

(defparameter *in* (make-array 1000000))
(defparameter *out* (make-array 1000000))
(defvar *old-work* nil)

(defun big-array-handler (in-array out-array 
                          start-index end-index 
                          &optional (iterations 20)
			  &aux (done 0))
  ;; A function that can run for a measurable amount of time, 
  ;; and one that can operate on a shared data structure in
  ;; several parallel threads.
  ;; This function performs long arithmetic operations on a segment of
  ;; a large array.  Disjoint segments can be processed in parallel
  ;; without interference.
  ;; Stay in fixnum operations to avoid consing that could interfere 
  ;; with parallel operation.
  ;; IN-ARRAY and OUT-ARRAY should be arrays of the same size
  ;; whose values are small fixnums or NIL. 
  ;; START-INDEX and END-INDEX are indices into these arrays
  ;; ITERATIONS is a positive integer.
  ;; This function iterates ITERATIONS times, modifying
  ;; the chunks of the two arrays from
  ;; START-INDEX and END-INDEX. The actual operations are not
  ;; important: they involve arithmetic calculations with
  ;; some checks to ensure values are suitable. Their purpose
  ;; is to take time so we can compare using one process
  ;; versus several in a pool.
  (dotimes (i iterations done)
    (dotimes (j (- end-index start-index))
      (let ((in (aref in-array (+ start-index j)))
	    (out (aref out-array (+ start-index j))))
	(if (or (null in) (eql in 0))
	    (setf (aref in-array (+ start-index j))
                  (setq out (truncate end-index (1+ i))))
	  (setq out (* (truncate in (1+ i)) (* i in))))
	(setf (aref out-array (+ start-index j)) out)
	(incf done)))))

(defun simple-parallel-task (&key n in out (r 20)
			     (endmark (make-barrier (1+ n)))
			     (width (length in))
			     (chunk-width (ceiling width n))
			     (work-done (list 0)))
  (ensure-default-process-pool)  ;; We make sure there is
                                 ;; a default process pool
  (dotimes (i n) ;; We run N pool processes
    (push (multiple-value-list
	    (process-pool-run
	     nil :function #'big-array-handler
	     :arguments (list in out (* i chunk-width) 
			      (min width (* (1+ i) chunk-width)) r)
              ;; We call BIG-ARRAY-HANDLER with a chunk of the array
              ;; where the chunk is (/ array-size n) suitably
              ;; adjusted to prevent overlap.
	     :data endmark
	     :report-end (lambda (work v err) ;; Report that a pool
                                              ;; process has completed its work.
			   (declare (ignore err))
			   (incf-atomic (car work-done) (car v))
			   (barrier-pass-through
			    (process-pool-work-item-data work)))))
	  *old-work*))
  (barrier-wait endmark) ;; Wait until all process-pool processes have 
                         ;; finished
  (car work-done))


        ;;; Some sample runs. Timing depends on factors like
        ;;; machine speed and amount of physical memory so you
        ;;; will likely see different timings but the ratio of
        ;;; timings should be similar.

        ;;; A run in one thread to establish a baseline timing.

cl-user(74): (time (simple-parallel-task :n 1 :in *in* :out *out* :r 50))
; cpu time (non-gc) 4.709284 sec user, 0.000000 sec system
; cpu time (gc)     0.000000 sec user, 0.000000 sec system
; cpu time (total)  4.709284 sec user, 0.000000 sec system
; cpu time (thread) 0.001000 sec user, 0.000000 sec system
; real time  4.708741 sec (100.0%)
; space allocation:
;  153 cons cells, 6,432 other bytes, 0 static bytes
; Page Faults: major: 0 (gc: 0), minor: 0 (gc: 0)

        ;;; A run in three threads takes about 1/3 of the time.
        ;;; The CPU time is the sum of the times of all processes
        ;;; so since 3 ran, the REAL time is roughly 1/3 CPU time.

cl-user(76): (time (simple-parallel-task :n 3 :in *in* :out *out* :r 50))
; cpu time (non-gc) 4.672289 sec user, 0.000000 sec system
; cpu time (gc)     0.000000 sec user, 0.000000 sec system
; cpu time (total)  4.672289 sec user, 0.000000 sec system
; cpu time (thread) 0.001000 sec user, 0.000000 sec system
; real time  1.592101 sec (293.5%)
; space allocation:
;  317 cons cells, 11,920 other bytes, 0 static bytes
; Page Faults: major: 0 (gc: 0), minor: 0 (gc: 0)


4.3 Process-pool functionality

All symbols are in the multiprocessing (nicknamed mp) package. Process-pool functionality is loaded into a Lisp image by evaluating

(require :process-pool)




process-pool readers

process-pool-work-item readers

See the description of process-pool-work-item for details about the slots of a work-item. The descriptions of the functions below simply link to the process-pool-work-item description.

5.0 Processes and their dynamic environments

The multiprocessing system described in this chapter is a nonstandard but upward-compatible extension to Common Lisp. Although multiprocessing can be used within a single application towards performing a single, integrated task, it is an important design feature that multiprocessing allows multiple unrelated tasks to reside in the same Lisp image. A user might run a window system, several independent Lisp listeners, a Lisp editor, and an expert system application (with network connections to processes in other hosts) all in the same Lisp world.

For independent tasks to coexist in a single world, the names of functions and special variables in separate systems cannot interfere with one another. This is the problem the package system is intended to solve, and with a little care it is quite possible to keep names separate. However, there are a number of public common-lisp package special variables which are implicitly used by all Lisp applications. Each process may want its own values assigned to these special variables. This section discusses how conflicting demands on these variables can be supported.

For example, according to the Common Lisp standard, the default value of the special variable *print-base* is 10. A stand-alone application may assume that if it never changes *print-base* its value will be 10. Or, if a stand-alone application always wanted to print in octal, it might set *print-base* to 8. The effect on its own calculation is well defined by the Common Lisp standard, but changing the global value of *print-base* may have an unfortunate effect on other unrelated applications running in the same Lisp world. If one application changes *print-base* what about others that assume it will always have the default value, or want to exercise independent control?

A solution to this problem is for each process that uses public variables to maintain its own bindings for those variables. When a process is first started, a stack-group or thread is created on which it will run. Whenever the process' computation binds special variables, those bindings are recorded on its stack-group or thread. A binding hides the previous value of a special variable behind a new (possibly different) value. Inside the scope of a binding, code sees only the bound value, not the original value, of the variable. Special variable bindings are local to the process that creates them and are never seen by other processes.

Note that there is only one global "binding" for a variable. While it always exists and is shared by all processes, if the variable has been bound in the process to another value, code inside that binding in that process will only see the bound value, not the global value. This is important to understand when there are multiple processes. Sometimes it is useful for processes to communicate through the value in a global binding. Other times processes must be protected from seeing the value other processes have bound a variable to.

Assume you have several processes and none of them binds the special variable *print-base*. If one process sets the variable, the global binding is the one affected and all of the processes will see the new value. However, any process that binds the variable as a special will not see changes to the global binding while its own binding is in effect, and conversely, any setq it does will not affect the value seen by the others.
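The hiding effect of a dynamic binding can be illustrated in portable Common Lisp (the names below are invented for the illustration); this same hiding is what isolates one process's bindings from another's:

```lisp
;; Sketch: a dynamic binding hides the global value.  Within the
;; LET, code sees the bound value; a SETQ inside the binding
;; changes only the binding, and the global value reappears when
;; the binding's scope is exited.
(defvar *demo-base* 10)

(defun current-base () *demo-base*)

(defun binding-demo ()
  (list (current-base)            ;; global value: 10
        (let ((*demo-base* 8))    ;; dynamic binding
          (setq *demo-base* 9)    ;; affects only the binding
          (current-base))         ;; bound value: 9
        (current-base)))          ;; global value again: 10
```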

The multiprocessing system provides a mechanism that allows processes to bind special variables at process startup time. These bindings are made in the process as it starts up, so any value form that references a global variable will see the sys:global-symbol-value of that symbol.

First of all, the bindings specified by *required-thread-bindings* are established for all new processes (not controlled by any argument or slot). The initial value of this list is an alist associating various variables with values. The only entry naming exported symbols is (*readtable* . *readtable*). This means that the global variable *readtable* is bound to the global symbol value of *readtable*, which is the original Lisp readtable (and which cannot be modified). This is often not what you want, but specifying a different value should be done as we describe just below. This binding is established here to avoid accidentally changing the global symbol value of *readtable* (which is permitted, of course; what is not permitted is changing the initial readtable itself).

Then the process-initial-bindings slot of the process is examined. If not nil, it should be an alist of symbols and forms to evaluate for value. The symbols are bound in the process as special variables with the given initial values, effectively wrapping those bindings around the entire execution of that process. If a particular variable appears more than once on the alist, entries after the first are ignored. It is here you should bind *readtable* if desired.

By default, make-process and process-run-function create processes with a null process-initial-bindings list. Such action is not appropriate for processes which may do arbitrary computations - for instance, a Lisp listener which accepts computations from a user - or for applications which need to be isolated from others. In such cases, the multiprocessing system provides a list of special variables with appropriate default values. The variable excl:*required-top-level-bindings* is bound to that list.

Another binding automatically established is *current-process*. When a process is running that process is the value of *current-process*.

The standard CL stream variables are bound to the value of *terminal-io* but that variable itself is not given a dynamic binding. This can cause problems because sometimes *terminal-io* may be set to a stream that will signal an error when used (see debugging.htm). The variable excl:*initial-terminal-io* holds the original *terminal-io* stream when Lisp starts up. It may be useful for processes that aren't connected to a usable *terminal-io* but wish to produce some output, for example for debugging.

Note (again) that the value forms are evaluated in the dynamic environment of the new process, not the process that created it, and this new dynamic environment has no special bindings in effect. Those value forms that themselves depend upon special variable values will therefore see the global values of those variables. The intention of this mechanism is that the new process should not inherit variable values from the process that started it without something explicit being done to pass the value. Some other ways to achieve the same end are shown below.

If, for example, you want your new process to share the readtable of the invoking process, putting an entry

(*readtable* . *readtable*) 

on the alist would not work. The value form would be evaluated on the new process, and the global value of *readtable* would result. Instead, you should do something like this (note that this is a code fragment - you must add forms where there are suspension points):

       `(:name ... 
          ((*readtable* . ',*readtable*) 

Since the :initial-bindings list is treated as an alist, the first entry shadows all succeeding ones. The effect of the above will be to set the alist cons for *readtable* to something like:

(*readtable* . (quote #<readtable @ #x504a1>))

where the readtable object is the readtable of the invoking process, and the quote is stripped off by the evaluation when the new process begins.
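Putting this together, creating a process that shares the invoking process's readtable might look like the following sketch. The worker function is hypothetical, and we assume the plist given as the first argument to process-run-function accepts an :initial-bindings entry alongside :name, as the fragment above suggests.

```lisp
;; Sketch: MY-WORKER is hypothetical; the :initial-bindings
;; keyword in the plist is assumed from the fragment above.
;; The backquote/comma idiom captures the *creating* process's
;; readtable object, so the value form evaluated in the new
;; process yields that object rather than the global readtable.
(defun start-worker-sharing-readtable ()
  (mp:process-run-function
   `(:name "worker"
     :initial-bindings ((*readtable* . ',*readtable*)))
   #'my-worker))
```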

Using process-run-function and related functions it is possible to run any kind of Lisp computation as a separate quasi-parallel process. All processes share the same Lisp world, so the only thing that differentiates one process from another (besides the state of its computation, of course) is the set of special-variable dynamic bindings of its binding stack. For example, in an environment with multiple windows it is possible to run multiple top-level Lisp listeners or other interactive command loops, each process in its own window; *terminal-io* would be bound in each process to a stream object connected to its particular input-output window.

There are two ways to create special bindings in a new process. The simple way is just to place the special variable on the lambda list of the process' initial function, and pass the value as an argument (note that this is a code fragment - the suspension points indicate where additional forms are required):

(defun my-command-processor (*terminal-io* *page-width*)
  (declare (special *page-width*))
  ...)

(let ((my-window (create-a-window ...)))
  (process-run-function "Bizarre Command Interpreter"
    #'my-command-processor my-window
    (page-size my-window))) 

However, this requires the process' initial function to provide for each variable that will be bound. The following more general idiom permits establishment of arbitrary bindings for the new process:

(defun establish-bindings (symbol-list value-list function args)
  (progv symbol-list value-list (apply function args)))

(let ((my-win (create-a-window ...)))
  (process-run-function "Bizarre Command Interpreter"
    #'establish-bindings
    '(*user-name* *phone-number*)
    (list "Fred" "555-1234")
    #'my-command-interpreter
    (list my-win (window-size my-win)))) 

Here the establish-bindings function is wrapped around the application of the my-command-interpreter function to its arguments; while that function is executing it will see bindings of the special variables *user-name* and *phone-number*.

See required-thread-binding and required-top-level-binding, macros which add elements to the *required-thread-bindings* and *required-top-level-bindings* alists.

5.1 Lisp listeners and special variable bindings

The problem of starting a real Lisp listener is actually somewhat more complex than the above discussion and sample code fragments suggest. For one thing, the Lisp system defined by Common Lisp depends on a large number of special variables, such as *readtable*, *package*, and *print-level*. Since commands executed by one Lisp listener might side effect some variables and thereby inappropriately affect other processes, each process should maintain separate bindings for these variables. Allegro CL provides a wrapping function, tpl:start-interactive-top-level, which automatically binds to reasonable default initial values all the special variables in the Lisp system. Within a call to tpl:start-interactive-top-level, a read-eval-print loop can be started with top-level-read-eval-print-loop.

Thus, the Allegro CL idiom for running a standard Lisp listener communicating with *terminal-io* (this might be done in a excl:*restart-app-function*) looks something like this:

(tpl:start-interactive-top-level *terminal-io* 
  #'tpl:top-level-read-eval-print-loop nil)

And the idiom for running a standard Lisp listener inside a window looks something like this:

(process-run-function "My Lisp Listener"
  #'(lambda ()
      (let ((window-stream (create-a-window ...)))
        (tpl:start-interactive-top-level window-stream
          #'tpl:top-level-read-eval-print-loop nil))))

Entry to tpl:top-level-read-eval-print-loop establishes additional bindings for certain variables used by the top level loop. These bindings are established inside any bindings established by start-interactive-top-level. The variables and initial binding values are taken from the alist bound to the variable *default-lisp-listener-bindings*.

tpl:top-level-read-eval-print-loop also provides a hook to customize individual Lisp listeners. It calls the function which is the value of *top-level-read-eval-print-loop-wrapper*, with two arguments, the internal function to invoke the read-eval-print loop, and the argument it requires.

In this rather trivial example, we define the wrapper so that the command character (the value of *command-char*, initially #\:) is #\$. Of course, in a real example, something more complex would be done, but the form would be similar to this example.

(defun my-lisp-listener-wrapper (function args)
  (let ((tpl:*command-char* #\$))
    (apply function args)))
;; After we evaluate the following forms, the command character will be $
;; (rather than :) in any new lisp listener process.
(setq tpl:*top-level-read-eval-print-loop-wrapper*
      'my-lisp-listener-wrapper)

What about setting up personal customizations? When Allegro CL is first started it searches for initialization files as described in startup.htm. The typical purpose of an initialization file is to load other files and to define personalized functions and top-level commands, but it is possible for such a file to setq special variables. It is important to understand what it means to do so.

An initialization file is loaded into Lisp before multiprocessing is started and even before the initial Lisp listener is created. If an initialization file sets any special variables, the value affected will (in general) be the global value of the special variable. If the purpose is to customize Lisp for some particular application, this is probably the wrong way to do it. Someday an unrelated system may be loaded into the same Lisp world which depends on reasonable default global variable values. Furthermore, the various default binding mechanisms described above will generally keep the global value from even being seen.

For example, if it is necessary to set *print-escape* to nil for some application, it is better for the application to set up its own binding of the variable in the application code itself, or if that is impossible, to have the code that starts the process wrap a binding of the variable around execution of the application with the mechanisms illustrated above. The worst way is to setq *print-escape* in an initialization file. Processes that assume the documented Common Lisp standard simply might not work properly if *print-escape* is nil.

A dynamic binding - that is, a 'location' where a value is stored - cannot be shared between processes. Each process may store the same (i.e. eql) value in its binding, but if one process does a setf of *print-base* it cannot affect the value of the binding of *print-base* seen by another process. If that value is a Lisp object that can be modified, side effects to that object will obviously be seen by both processes. Of the variables listed above, those that typically hold side-effectable objects are *package*, *readtable*, and the several stream variables. Numbers (such as the value of *print-base*) and the boolean values t and nil (such as might be the value of *print-escape*) are not objects that can be side-effected.

Unlike the standard common-lisp package special variables, it is quite reasonable to place in your initialization file personal customizations for top-level Lisp listener variables documented in top-level.htm, such as *prompt*, that are not bound per process. No standard Common Lisp code should depend on these.

Since the multiprocessing system tries hard to insulate variable bindings between processes, the macro tpl:setq-default is provided to make it easier for a user to change the default value of some standard variable when that is what is really desired. It is intended primarily for inclusion in initialization files.

setq-default is convenient for simple customization and suffices for simple environments. However, because it sets the global value of a symbol which is seen by all processes, it may not be appropriate in Lisp environments where the user may not have control over the needs of some processes. In such circumstances it may be preferable not to change a global symbol value in an initialization file with tpl:setq-default. Instead, users may just push a new cons onto the *default-lisp-listener-bindings* alist. Such action will have much the same effect with regard to any code run inside a Lisp listener but will not affect non-listener processes.

6.0 Gates

A gate is an object with two states, open and closed. It is created with make-gate. Its state can be opened (see open-gate) or closed (see close-gate) and can be tested with the gate-open-p function. A waiting process whose wait function is #'gate-open-p places much lower cpu demands on an os-threaded Lisp than a waiting process that has a general wait function. That is because the process-wait threading code recognizes this function (either the name or the function itself) specially and knows how to treat it much more efficiently than other arbitrary Lisp functions.

Therefore, it is often worthwhile to add gates to a program (expending the additional programming effort) to reduce the overhead of blocked processes, even though gates might not be logically necessary. The example just below illustrates this. Logically, the gate object is not necessary, in that the example also depends on process-locks and could be written more simply using just the process-lock. But the example is more efficient because it uses gates.
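A minimal sketch of the cheap-waiting idiom (the process name and the gated work are placeholders): giving process-wait the function #'gate-open-p with the gate as its argument lets the scheduler block the waiter efficiently until some other process opens the gate.

```lisp
;; Sketch: one process blocks cheaply on a gate; another opens it.
(defvar *ready-gate* (mp:make-gate nil))  ;; created closed

(mp:process-run-function "gate waiter"
  #'(lambda ()
      ;; PROCESS-WAIT recognizes #'GATE-OPEN-P specially, so this
      ;; wait is much cheaper than an arbitrary wait function.
      (mp:process-wait "Waiting for gate" #'mp:gate-open-p *ready-gate*)
      ;; ... do the gated work here ...
      ))

;; Later, from some other process:
(mp:open-gate *ready-gate*)  ;; releases the waiter
```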

What is the difference between a gate and a lock?

A gate is either open or closed. Any process can open or close a gate at any time with no restrictions. Therefore, a process has no information about who opened or closed a gate or how long it will remain in that state. Gates are very light-weight and efficient, but require careful coordination among the processes which change and/or query the gate.

Locks implement specific guarantees about when and which processes may acquire or release a lock. A process-lock allows exactly one process to acquire the lock; the same process must (normally) release it. sharable-lock implements a more complex locking policy. Process locks are described in Section 8.0 Process locks in this document and sharable locks are described in Sharable locks in smp.htm.

Gate example

Here is an example of using a gate to control execution.

;; an instance of xqueue represents a server and its queue.
;; a gate is used to indicate that data needing to be handled
;; is on the queue.

(defstruct xqueue
  data      ;; the list of queued work items
  lock      ;; a process-lock guarding the data
  gate      ;; open when unhandled data is present
  process)  ;; the server process

;; anyone with access to an xqueue item can add to its work queue by calling
;; xqueue-add. After the data is added, the associated gate is opened
;; (an open gate indicates that data is waiting to be handled).

(defun xqueue-add (xq item)
  (mp:with-process-lock ((xqueue-lock xq))
    (setf (xqueue-data xq) (nconc (xqueue-data xq) (list item)))
    (mp:open-gate (xqueue-gate xq))))

;; create-xqueue-server starts a server process running and returns
;; the associated xqueue item to which work items can be queued.
;; The server process calls server-function on each element it retrieves
;; from the queue.  When server-function returns :exit, the server
;; process exits.
;; note that the main loop (the process that is set to 
;; the value of (xqueue-process xq)) waits on the gate being open
;; (indicating unhandled data is present) and closes the gate when
;; all available data is handled.

(defun create-xqueue-server (server-function &key (name "Server"))
  (let ((xq (make-xqueue)))
    (setf (xqueue-lock xq) (mp:make-process-lock)
	  (xqueue-gate xq) (mp:make-gate nil)
	  (xqueue-process xq)
	  (mp:process-run-function
	   name
	   #'(lambda (lxq sf)
	       (loop
		 (mp:process-wait "Waiting for data"
				  #'mp:gate-open-p (xqueue-gate lxq))
		 (let (e run)
		   (mp:with-process-lock ((xqueue-lock lxq))
		     (if* (null (xqueue-data lxq))
			then (mp:close-gate (xqueue-gate lxq))
			else (setq e (pop (xqueue-data lxq)))
			     (setq run t)))
		   (when (and run (eq :exit (funcall sf e)))
		     (return)))))
	   xq server-function))
    xq))

6.1 Semaphores

Each gate has an associated semaphore count, which is an integer starting at 0. The function get-semaphore decrements that value (if it is greater than 0), and when the count becomes 0, it closes the associated gate. The function put-semaphore increments the count, opens the gate (if necessary), and releases just one of the threads waiting on that gate, if there are any. Semaphores can be more efficient at controlling gates when several threads are processing a work queue.
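A sketch of the semaphore idiom for a work queue, assuming (as the text above implies) that put-semaphore and get-semaphore each take the gate as their single argument:

```lisp
;; Sketch: use a gate's semaphore count so that each posted work
;; item wakes at most one waiting worker thread.
(defvar *work-gate* (mp:make-gate nil))

(defun producer-post-item ()
  ;; Increments the count, opens the gate if necessary, and
  ;; releases just one waiting thread, if there are any.
  (mp:put-semaphore *work-gate*))

(defun worker-take-item ()
  ;; Decrements the count; when it reaches 0 the gate is closed
  ;; again, so other workers keep waiting.
  (mp:get-semaphore *work-gate*))
```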

7.0 Queues

Conceptually, a queue is a first in, first out (FIFO) list. (Note that queues may not be implemented as lists, but the details of the actual implementation are not necessary for understanding the concept.) Enqueuing an object is conceptually appending it to the end of the list; dequeuing an object is conceptually returning the car (the first element) of the list and replacing the list with its cdr.

Queues are often useful, and because certain Allegro CL features required queues, we decided to make the queue implementation public. The advantages of using queues over using lists (appending to and popping from a list) are that enqueuing and dequeuing are atomic with respect to other processes, and that dequeue can wait until an object is available.

Queues are instances of the class queue. Create a queue with (make-instance 'mp:queue). Objects are added to queues with the enqueue generic function and removed from queues with the dequeue generic function. Queues have no intrinsic size limit.

Other queue functions include queue-empty-p and queue-length.
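Putting these pieces together, here is a minimal sketch of basic queue use; the return values shown in comments follow from the FIFO behavior described above:

;; Create a queue, add two items, and inspect it.
(let ((q (make-instance 'mp:queue)))
  (mp:enqueue q :job-1)
  (mp:enqueue q :job-2)
  (list (mp:queue-length q)    ; 2
        (mp:queue-empty-p q)   ; nil
        (mp:dequeue q)))       ; :job-1 -- first in, first out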

7.1 Sized queues

The basic queue has no restriction on size, so in any production environment queue size will need to be regulated or its growth could exhaust the available memory and crash your program.

To ensure that a queue cannot grow beyond an expected size we can customize the basic CLOS object to track its current size as well as denote the maximum number of items we'll allow. Here is the definition of our finite sized queue:

(defvar *default-finite-queue-max-size* 100
  "The default maximum size for a finite-queue.")

(defclass finite-queue (mp:queue)
  ((count :documentation "The current size of the queue."
          :initform 0
          :accessor queue-count)
   (max-size :documentation "The maximum size of the queue."
             :initform *default-finite-queue-max-size*
             :accessor queue-max-size))
  (:documentation "A mp:queue with finite size."))

In the event of the queue being full during an enqueue attempt, we'll create a queue-full condition:

(define-condition queue-full (simple-error)
  ((queue :initarg :queue
          :reader queue-full-queue))
  (:report (lambda (condition stream)
  	     (let ((queue (queue-full-queue condition)))
               (format stream
               	       "~A is at its maximum size of ~D."
                       queue
                       (queue-max-size queue))))))
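A caller can then handle this condition like any other error. The sketch below assumes q and item come from the surrounding application:

;; Sketch: degrade gracefully when the queue is full.
(handler-case (mp:enqueue q item)
  (queue-full (c)
    (format *error-output* "~A is full; dropping ~S~%"
            (queue-full-queue c) item)))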

We then wrap the default methods for mp:queue to ensure that our finite-queue tracks its size and stays within its bounds:

(defmethod mp:enqueue :around ((queue finite-queue) what)
  (declare (ignorable what))
  (when (>= (queue-count queue) (queue-max-size queue))
    (error 'queue-full :queue queue))
  (incf-atomic (slot-value queue 'count) 1)
  (call-next-method))

(defmethod mp:dequeue :around ((queue finite-queue) &key wait empty-queue-results)
  (declare (ignorable wait empty-queue-results))
  (let ((item (call-next-method)))
    (decf-atomic (slot-value queue 'count) 1)
    item))

Notice the use of incf-atomic/decf-atomic in the above methods. These ensure that simultaneous increments and decrements from multiple threads will all be accounted for. But this doesn't eliminate all possible concurrency effects: since there is no locking around the check of the max-size or the current count of the queue, there is a small window for these values to change between when they are checked and when the next method is called.

Depending on how sensitive our application is to the number of items on the queue, this technique might be satisfactory and even have performance benefits over the strict locking technique. Another positive of the non-locking technique is that once the enqueuing finishes we still have an accurate queue-count, and the queue size will never grow larger than a known amount calculable by this formula:

  maximum queue size + number of enqueuing threads - 1 = adjusted maximum

For example, a queue with a maximum size of 100 and 5 enqueuing threads can never exceed 104 items. For many cases this type of queue limiting is sufficient. In the case that an absolute maximum must be maintained, or if the maximum number of enqueuing threads is unknown, we can provide stricter locking at the performance cost of acquiring and holding a lock. Here is an example of such a technique:

  (defmethod mp:enqueue :around ((queue finite-queue) what)
    (declare (ignorable what))
    (let ((queue-full-p nil))
      (mp:with-process-lock ((mp:queue-lock queue))
        (if (>= (queue-count queue) (queue-max-size queue))
            (setf queue-full-p t)
            (incf (queue-count queue))))
      (if queue-full-p
          (error 'queue-full :queue queue)
          (call-next-method))))
The lock is acquired to read, compare, and update the size of the queue. This prevents inaccuracies in the reported size of the queue and ensures that the queue stays within its designated size.

In all cases, programming in a multiprocessing environment, and particularly in an SMP environment, is an exercise in finding a design that balances program requirements, thread safety, and satisfactory performance. We hope this has introduced you to the mp:queue object, how to customize it, and both loose and strict locking techniques.

8.0 Process locks

A process-lock is a defstruct which provides a mechanism for interlocking process execution. Lock objects are created with make-process-lock. A process-lock is either free or it is seized by exactly one process. When a lock is seized, a non-nil value is stored in the lock object (in the slot named locker). Usually this is the process which seized the lock, but can be any Lisp object other than nil. Any process which tries to seize the lock before it is released will block. This includes the process which has seized the lock; the with-process-lock macro protects against such recursion.

The locker slot (accessed with process-lock-locker) indicates whether the lock is seized or free. If the value of the locker slot is nil, the lock is free. If it is non-nil, the lock is seized. Both process-lock and process-unlock take a lock-value first optional argument. This argument defaults to the current process (the value of *current-process*). This value, whatever it is, is stored in the locker slot of the lock object by process-lock. When process-unlock is called, the lock-value argument is compared to the value in the locker slot. If the values are the same, the lock is unlocked. If the values are different, process-unlock signals an error.

The normal operation is as follows. Suppose that a lock object has been created and is the value of the variable my-lock:

(setq my-lock (mp:make-process-lock :name "my-lock"))

Suppose as well that at least two processes are running, the FOO process and the BAR process. When the FOO process is current, process-lock is called on my-lock:

(mp:process-lock my-lock)

Now the value in the locker slot is #<FOO process>. Next, process-unlock is called in the FOO process:

(mp:process-unlock my-lock)

The #<FOO process> is passed as the value of the lock-value optional argument, and, since it matches the value in the locker slot, the lock is unlocked and the value of the locker slot is set to nil.

Meanwhile, suppose in process BAR, process-lock is called:

(mp:process-lock my-lock)

If this call occurs while the lock is seized by the FOO process, the BAR process blocks (waits) until the lock is freed and then itself seizes the lock (we assume no other process is waiting for the lock). As soon as the FOO process gives up the lock, the call to process-lock in the BAR process completes, with the value in the locker slot being set to #<BAR process>.

Both process-lock and the without-scheduling macro protect a segment of code from interleaving execution with other processes. Neither has significant execution overhead, although without-scheduling is somewhat more efficient. However, the mechanisms have different ranges of applicability. A process-lock blocks only those other processes which request the same lock; without-scheduling unconditionally blocks all other processes, even those completely unrelated to the operation being protected. This might include high-priority processes that need to field interrupts with low latency. Therefore, the without-scheduling macro should not be used around a code body that might require significant time to execute.

The descriptions below provide only a brief introduction. Please follow the links to the individual description pages for details.

Name Arguments Notes
make-process-lock &key name This function creates a new lock object. The value of the :name keyword argument should be a string which is used for documentation and in the whostate of processes waiting for the lock. (There are additional keyword arguments for internal use not listed here. They should not be set by user code.)
process-lock lock &optional lock-value whostate timeout This function seizes lock with the value lock-value (which must be non-nil).
process-unlock lock &optional lock-value This function unlocks lock, setting the value in the locker slot to nil. The value of the locker slot of the lock must be the same as the lock-value argument. If it is not, an error is signaled.
process-lock-locker lock This function returns the value of the locker slot of lock. This value is usually the process holding the lock, but can be any Lisp value. If the value is nil, the lock is not locked.
process-lock-p object Returns true if object is a lock (as returned by make-process-lock) and returns nil otherwise.
with-process-lock (lock &key norecursive) &body body This macro executes the body with lock seized.
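As a short illustration of with-process-lock (the variable and function names are illustrative), a counter shared by several processes can be protected like this:

(defvar *hits* 0)
(defvar *hits-lock* (mp:make-process-lock :name "hits-lock"))

(defun record-hit ()
  ;; Seizes *hits-lock*, evaluates the body, and releases the lock
  ;; even on a non-local exit.
  (mp:with-process-lock (*hits-lock*)
    (incf *hits*)))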

9.0 cl:sleep and minimum sleeping time

While not strictly a multiprocessing issue, cl:sleep is actually implemented as part of the multiprocessing code (though it is in a module always present in a running Lisp).

A call to cl:sleep is in fact effectively a call to mp:process-sleep (with the whostate argument set to "sleeping"). cl:sleep requires a non-negative argument and signals an error when passed a negative one.

As to how short a time one can sleep: the minimum effective argument to sleep is controlled by an internal parameter in Allegro CL. For a variety of reasons, mostly historic, the initial value is 75 (meaning 75 milliseconds). To allow for shorter waiting times, you can change the internal value. This form:

     (sys::thread-control :clock-event-delta)

returns the value the system is currently using, as an integer number of milliseconds. (Note that sys::thread-control is an internal function and is not further documented. It should not be used for any purpose other than determining or changing the minimum sleep time.) The form is a setf-able location. To make the delta 10 milliseconds, evaluate this:

     (setf (sys::thread-control :clock-event-delta) 10)

Reasonable values range from 0 to 75. The value is not saved in a dumplisp image. Each fresh invocation of Lisp must reset it if something other than the default value is desired.
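For example, to lower the granularity temporarily and restore the previous value afterward, one can combine the two forms shown above (this is a sketch using only those forms):

;; Allow ~10 ms sleeps for the duration of a body of code,
;; restoring the previous delta on exit.
(let ((old-delta (sys::thread-control :clock-event-delta)))
  (setf (sys::thread-control :clock-event-delta) 10)
  (unwind-protect
      (sleep 0.01)   ; now effective at roughly 10 ms granularity
    (setf (sys::thread-control :clock-event-delta) old-delta)))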

10.0 OS error code 0xc on Windows

You may see an error on Windows:

   Error: Attempt to start thread gave OS error code 0xc.

This error occurs when Windows cannot allocate the memory needed to create a new thread for a Lisp process, usually because there is not enough room for the thread's stack. It may be mitigated by reducing the default thread stack size (see the Windows documentation for information on the default stack size).

11.0 A simple example of multiprocessing

The example below can be loaded into Lisp either compiled or interpreted. It shows simple use of mp:process-run-function and mp:with-process-lock. Three parallel processes compute the number of trailing zeroes in factorial N for different ranges of N.

The results are printed to the stream given by the stream argument, which defaults to t, denoting *standard-output*. When using the IDE, in processes outside the Listener (Debug Window), *standard-output* is typically the console on Windows and the shell that started Allegro CL and the IDE on Linux and the Mac. You must display the console or shell to see the output. On Windows, you can show the console (which is often hidden) by evaluating (show-console).

Or you can open an editor pane, use Tools | Get Component to set the value of a variable to that pane, and use that variable as the value of stream in the call to moby-process-test, in which case the output will go to that pane.

(in-package :cl-user)  
;; in the IDE (in-package :cg-user)
(require :process)
(defun factorial (n) (if (< n 2) 1 (* n (factorial (1- n)))))
;; This lock is used to prevent output interleaving.
(defvar moby-output-lock (mp:make-process-lock))
;; Print to the stream the number of trailing
;; zeros in (factorial n) from n=from up to n=to.
;; This is a *very* inefficient way to do this computation,
;; but the point is to make it run slow enough to see.
(defun process-test (stream from to)
  (do ((n from (1+ n))) 
      ((>= n to)) 
    (do ((x (factorial n) (/ x 10)) 
         (zeros -1 (1+ zeros))) 
        ((not (integerp x)) 
         (mp:with-process-lock (moby-output-lock) 
            (format stream "factorial(~d) has ~d trailing zeros~%"
                    n zeros)))))) 
;; This starts three processes in parallel. 
;; The original Lisp listener returns immediately, 
;; and will accept typed forms while the other processes run. 
(defun moby-process-test (&optional (stream t)) 
  (mp:process-run-function "Test 1" #'process-test stream 400 440)
  (mp:process-run-function "Test 2" #'process-test stream 440 470)
  (mp:process-run-function "Test 3" #'process-test stream 470 500))
;; Make sure factorial itself is compiled 
;; because large factorials exceed the interpreter's stack. 
(unless (compiled-function-p #'factorial) (compile 'factorial)) 
(format t "Type (moby-process-test) to test multi-processing.~%") 

Appendix A: Wide binding

The value of a variable depends on its current binding. In a multiprocessing environment, the current binding can be different in different processes. Programmers must be aware of the fact that they must establish bindings in different processes for variables for which different processes require different values. For example, a process printing a value to a file for debugging purposes may want *print-length* and *print-level* both to be nil so that all the information is printed, but a listener process may want those variables to be small integers so that the user is not overwhelmed with unneeded data. Without some form of per-process binding, the process doing the printing to a file could mess up the process managing the listener.

While the need for binding is important for programmers to understand, the multiprocessing binding implementation in theory is not. So long as it works as described, implementation details are at best simply distracting, and at worst actually misleading. (They can be misleading because a programmer might try to take advantage of details of the implementation that are either misunderstood or once properly understood but since modified, perhaps by a patch or a new release. For example, in this section, we talk of the virtual vector of bindings associated with the value of a symbol. It would be a mistake to try to get a handle on this vector and manipulate it directly instead of using a documented tool such as symeval-in-process.)

However, certain features of the binding model are visible to programmers and users, such as the bind index (which we describe below). In order to understand these features, it is necessary to understand some of the multiprocessing binding model. And, so long as the information is not used to get around proper ways of doing things, general understanding of the model is probably desirable.

There are various binding models that can be used. Broadly, there is shallow binding, where the value slot contains the current binding in some form, and deep binding, where a process's binding stack is searched for a symbol's value.

Releases of Allegro CL prior to 6.2 used a narrow form of the standard shallow binding model: the value slot of a symbol held the current binding of the current process. (The global value was held in a separate slot.) Each process had an associated bindstack in which bindings were recorded. Switching processes required unwinding the current set of bindings and winding in the newly current process's bindings. This made accessing the current value in a process fast, but made both process switching and accessing a binding in another process slow.

Allegro CL now uses a wide variant of shallow binding. A symbol has a logically wide value cell; it is conceptually a vector of values, one for each process. Each process's bindstack has an index by which the symbol's value vector is accessed, so that every process has immediate access to its own set of current bindings. Because of the way the conceptual value-vector is implemented, the bindstacks need not be unwound and rewound when the current process changes.

One consequence of the new model is that symbol-value access for wide binding is slightly slower than for narrow binding: an extra indirection into the value vector is needed in order to access a symbol's value. However, since wide binding is still a shallow-binding technique, it is still much faster than deep binding.

The advantages of wide binding over narrow binding are that process switching no longer requires unwinding and rewinding bindstacks, and so is much faster, and that a symbol's binding in another process can be accessed quickly (for example with symeval-in-process).

We wish to emphasize again that the details of the vector of values implementation are internal. Programs should not try to access or modify the vector of values associated with a symbol.

Appendix A.1 The bindstack index

Under the normal multiprocessing model, each process has one implementation structure, and each of these implementation structures normally has one bindstack. When a bindstack is first allocated, it is given an index not used by any other bindstack in the system (including dead bindstacks which have not yet been garbage-collected). This index serves as the index into each symbol's value-vector; it is also called the bnp (Bindstack Name Pointer) in some situations. When a process has had a bindstack allocated for it, the bindstack's index is shown by the process's print-object method. Note that this index cannot be considered an identifier; if a process dies and is gc'd, another process will eventually take on the same index. However, the index can be useful in distinguishing between currently running processes, especially if those processes have been given the same name.

(More on the bindstack index name: bnp or Bindstack Name Pointer is an archaic name for the bindstack index, also called Bix.)

A bindstack index can be seen in several places, including trace output in a multiprocessing context and the output of the :processes (:pro) top-level command, as the sample run below shows.

A full sample run of tracing in a multiprocessing context is given below. This example only demonstrates the wide-binding concepts and has no other useful purpose. In it, a client starts up a server and then writes a buffer-full of data to it, after which it waits to read the data back again. The server reads the buffer from the client and then writes it back, after which it enters a break loop (so that the process does not die). The client is then able to read the buffer sent back by the server, print it, and enter a break loop itself.

Note that after the first two unrelated device-read entry/exit pairs, the traced device-read call appears to have been entered twice without the first of the two exiting. However, with the aid of the :processes command, we can see that the first entry is on behalf of the "run-client" process, which has bindstack-index 3, and the second device-read entry is for bindstack-index 5, the "server" process. It may be hard to see the second return from device-read (although it actually corresponds to the first device-read entry of the pair), but it does appear after the server enters its break loop.

cl-user(1): (shell "cat")
(in-package :user)

(eval-when (compile load eval)
  (require :sock))

(defun start-server (buffer)
  (let ((sock (socket:make-socket :connect :passive)))
    (mp:process-run-function
     "server" #'server-get-connection sock buffer)
    (socket:local-port sock)))

(defun server-get-connection (sock buffer)
  (let ((stm (socket:accept-connection sock)))
    (close sock) ;; don't need it anymore
    (unwind-protect
	(progn
	  (read-in-buffer stm buffer)
	  (write-out-buffer stm buffer))
      (close stm))
    (break "server breaks")))

(defun run-client (port buffer)
  (let ((stm (socket:make-socket :remote-host "localhost"
				 :remote-port port)))
    (unwind-protect
	(progn
	  (dotimes (i (length buffer))
	    (setf (aref buffer i) (mod i 256)))
	  (write-out-buffer stm buffer)
	  (socket:shutdown stm :direction :output)
	  (read-in-buffer stm buffer))
      (close stm))))

(defun write-out-buffer (stm buffer)
  (dotimes (i (length buffer))
    (write-byte (aref buffer i) stm)))

(defun read-in-buffer (stm buffer)
  (dotimes (i (length buffer))
    (setf (aref buffer i)
      (read-byte stm))))

(defun socket-run (&optional (message-length 10))
  (let ((client-buffer
	 (make-array message-length :element-type '(unsigned-byte 8)))
	(server-buffer
	 (make-array message-length :element-type '(unsigned-byte 8))))
    (run-client (start-server server-buffer) client-buffer)
    (format t "~s~%" client-buffer)
    (break "client breaks")))
cl-user(2): :cl socket-trace
; Fast loading [...]/socket-trace.fasl
cl-user(3): (require :acldns)
; Fast loading [...]/code/acldns.fasl
cl-user(4): (trace (device-read :inside (or server-get-connection run-client) :not-inside break))
cl-user(5): (mp:process-run-function "run-client" #'socket-run)
#<multiprocessing:process run-client @ #x71b375fa>
 0[3]: (device-read #<file-simple-stream
                      #p"/etc/resolv.conf" for input pos 0 @
                    nil 0 nil t)
 0[3]: returned 109
 0[3]: (device-read #<file-simple-stream
                      #p"/etc/resolv.conf" for input pos 109 @
                    nil 0 nil t)
 0[3]: returned -1
 0[3]: (device-read #<multivalent stream socket connected from
                      localhost/4137 to localhost/4136 @ #x71b3f912>
                    nil 0 nil t)
 0[5]: (device-read #<multivalent stream socket connected from
                      localhost/4136 to localhost/4137 @ #x71b44e42>
                    nil 0 nil t)
 0[5]: returned 10
Break: server breaks

Restart actions (select using :continue):
 0: return from break.
 1: Abort entirely from this process.
[Current process: server]
[1c] cl-user(1): 
 0[3]: returned 10
#(0 1 2 3 4 5 6 7 8 9)
Break: client breaks

Restart actions (select using :continue):
 0: return from break.
 1: Abort entirely from this process.
[Current process: run-client]
[1c] cl-user(1): :pro
P Bix Dis   Sec  dSec  Priority  State   Process Name, Whostate, Arrest
*   5   1     0   0.0         0 waiting  server, waiting for input
*   3   3     0   0.0         0 waiting  run-client, waiting for input
*   4   1     0   0.0         0 waiting  Domain Name Server Client,
                                           waiting for input
*   1   2     0   0.0         0 runnable Initial Lisp Listener

Appendix B: Processes and the runtime analyzer

The time runtime analyzer in Allegro CL (documented in runtime-analyzer.htm) collects data by periodically recording which functions are on the stack. The space runtime analyzer records all requests for new allocations. It is possible to set a flag in a process object that prevents sampling when that process is the current process. This permits more accurate data for time spent or space used by the process and code of interest.

When a new process is created, the value of the flag is set to its value in the current process. The predicate profile-process-p returns the value of the profiling flag and can be used with setf to change it.

Copyright (c) 1998-2022, Franz Inc. Lafayette, CA., USA. All rights reserved.
This page was not revised from the 10.0 page.
Created 2019.8.20.
