| Allegro CL version 9.0 Unrevised from 8.2 to 9.0. 8.2 version |
1.0 Runtime analyzer Introduction

The runtime analyzer is a tool to make Lisp programs more efficient. Runtime analyzer information tells you which parts of the program run slowly or use a lot of space (thus increasing garbage collection). Once you know this, you can improve efficiency by optimizing (see compiling.htm) or by recoding.
The Runtime Analyzer works on all platforms. (Problems with the Runtime Analyzer on AMD64 processors reported in earlier versions of this document have been resolved. That processor is used by 64-bit Allegro CL on Linux.) Note that the runtime analyzer produces profiles of running code. We generally use the terms runtime analysis and profile interchangeably, and many symbols associated with the runtime analyzer have profile in their names.
Allegro CL provides three runtime analysis tools for assisting users in determining where time is spent or space is used. Most powerful are the space and time profilers which statistically sample a running program to determine which functions use the most time or which use the most space. The call-counting profiler keeps a count of the number of times a compiled function is actually called while a program is running.
Allegro Composer extends the space and time analyzers by providing a graphical display of the results as well as the ability to start and stop the profilers graphically. Without Allegro Composer, all the same information is gathered and reported but the information is printed to the terminal.
The Allegro CL Integrated Development Environment on Windows also provides a graphical display of runtime analyzer data and various menu commands for starting and stopping analysis. See the description of the Runtime Analyzer Results dialog in the IDE help file. The IDE additionally provides a Runtime Analyzer Control dialog with graphic tools for starting, suspending, and stopping profiles and for controlling what is profiled. The Runtime Analysis Graph dialog is described with the Runtime Analyzer Results dialog.
We describe the three types of runtime analyzer below. Later sections describe how to use the runtime analyzer and how to interpret the results. Please note the following:
The CPU time analyzer collects data by interrupting execution of the Lisp periodically and recording the functions on the execution stack. The variable *granularity* controls how often execution is interrupted. Its value is the integer number of machine clock ticks between samples.
A machine clock tick occurs approximately every 0.01 seconds (the exact value is machine dependent). In that time, the machine will typically execute tens of thousands of machine instructions. The statistical nature of the time analyzer should therefore be evident. However, under reasonable assumptions concerning lack of interaction between execution of your program and machine clock ticks and with sufficient samples, the information gathered by the sampling process will closely mirror the actual behavior of the code which is executed.
The `reasonable assumptions' are that (1) there is no cycle in the program harmonic with the sampling interval, so that the interrupt times do provide a random sample of program execution, and (2) if your program does several different things, enough time is spent on each to provide statistically significant results. Several thousand samples are statistically sufficient. Of course, functions where your code spends very little time may be missed altogether by the time analyzer. However, the purpose of the runtime analyzer is to assist you in making your code run faster, and functions which use little time are not important in achieving that result.
The space analyzer tries to identify the functions which use the most space. Like the time analyzer, the space analyzer is statistical but the method for sampling is quite different.
The time analyzer stops execution regularly and examines the stack. The space analyzer stops execution whenever a page of memory is allocated. The statistical nature of the space analyzer arises from the fact that the function which calls for a page of memory will not (typically) use it all. After the function which called for the page has completed, there may be sufficient room left on the page for the space requirements of the next function. Therefore, only the first function is charged with needing space even though both functions used the space.
The assumption of the space analyzer is that the requirement for new pages of memory occurs randomly and in proportion to the amount of space actually required by a function. So given a reasonably large sample, the functions that use large amounts of space will have proportionally large amounts of space attributed to them by the analyzer.
In Allegro CL, memory is organized into three kinds of objects: symbols, conses, and other. Both symbols and conses have a specific size and so a known number can fit on a page. Other Lisp objects require variable amounts of memory (depending on what information they contain). All objects other than symbols and conses are stored on other pages.
The space analyzer thus records information whenever one of the functions that allocates a new page of memory is called. The request can come from the internal function _page_from_new (for symbols and conses) or from the function new_other (for other objects). These functions are deep in the system and are not typically user visible. On machines where this is possible, during analysis, hooks associated with those two functions stop execution when they are called and record information about what is on the stack. Note that because of the way stack linking is done on certain machines, you may not see the actual _page_from_new calls, but rather Lisp functions (such as cons) which trigger the calls. These analyses are harder to interpret, but the stack linking requirements make it impossible to list these calls with more accuracy. (The examples in the document are from platforms that do show _page_from_new.)
Note that the time macro also provides information on space used by the form evaluated. This information is quite accurate (in some cases, the total space used is more accurately reported by the time macro than by the space analyzer). Time does not provide a breakdown of what within a form caused space usage and that is what the space analyzer is designed to do.
The call-counting profiler is integrated with the space and time analyzers. No special compilation is necessary for calls to a function to be counted, but you must tell the system to keep counts by specifying true values for the :count or :count-list keyword arguments when the runtime analyzer is started. (However, see verify-funcalls-switch: funcall'ed functions in compiled code will, in certain cases, not be counted if the call was compiled with that compiler switch false.) Any function can be call-counted. The count is exact, not statistical. The only `calls' that will be missed are those where a function is compiled inline and the inline code is executed, and, on certain architectures, calls triggered by funcall. Note that the overhead for counting calls can be significant, and that overhead is recorded by the time analyzer. There is no space overhead, so the space analyzer results are unaffected. Therefore, the reported time usage is skewed by the call-counting overhead when a time analysis also does call-counting. The time results are still useful, but must be interpreted with care, and a time-only analysis should not be compared with a time and call-counting analysis.
Unless otherwise indicated, the symbols described in this section are in the profiler (nicknamed prof) package.
In order to do a runtime analysis, you need to start it and stop it. You can start it with start-profiler and stop it with stop-profiler, or you can use the macro with-profiling to cause the runtime analyzer to be run while a specific body of code is executed. The variable *maxsamples* controls the maximum number of samples that will be taken. Data collection will stop when that number of samples has been recorded even if stop-profiler is not called or the code specified to with-profiling has not completed.
Note that it is important to distinguish between starting and stopping data collection and suspending data collection. start-profiler and stop-profiler are the beginning and end of a data collection run. No data is collected before start-profiler is called and none can be collected after stop-profiler is called. Temporary suspension of data collection is done with the functions start-sampling and stop-sampling. On Windows using the Integrated Development Environment, commands on the Run menu can be used to start and stop data collection and sampling.
The macro with-profiling can be used to collect data on the evaluation of specific forms. The macro with-sampling similarly can be used to sample only as specific forms are evaluated.
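Here is a minimal sketch of the two styles. The workload form do-some-work is hypothetical, and start-profiler is assumed here to accept the same :type keyword that with-profiling takes in the examples later in this document.

;; Explicit start and stop of a data collection run:
(prof:start-profiler :type :time)
(do-some-work)                       ; hypothetical workload
(prof:stop-profiler)

;; The same collection limited to one body of code:
(prof:with-profiling (:type :time)
  (do-some-work))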
Please note the following about time and space data collection. The raw data is written to temporary files named according to the variables prof::*sizefile-template* and prof::*samplefile-template*, whose values are either "/usr/tmp/sizes" and "/usr/tmp/sample" or "/tmp/sizes" and "/tmp/sample". (The actual data files have filenames longer than sizes and sample.) You may change the values of these variables if the wrong or an inappropriate temporary directory is specified. The values must be strings like "<my-tmp-dir>/sizes" and "<my-tmp-dir>/sample", where <my-tmp-dir> is the absolute path of the directory you want to use.
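For example, to put the data files in a different temporary directory (the directory /var/tmp here is only an illustration):

(setq prof::*sizefile-template*   "/var/tmp/sizes")
(setq prof::*samplefile-template* "/var/tmp/sample")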
You may see the message shown below when collecting data. This section explains what it means. The reading of the C symbol table into Lisp form has been enhanced to include not only symbols from the executable itself (which has itself shrunk to a very small size) but also any shared libraries that the Lisp uses, including the heart of the Lisp itself. The function list-all-foreign-libraries lists those libraries whose symbols Allegro attempts to read in.
Sometimes, the shared-libraries do not have the necessary headers and sections that are needed for proper building of a C symbol table in lisp. In these cases, a profiler-error is given, for example:
USER(2): (prof:show-flat-profile)
profiler error: couldn't find .text section
profiler error: couldn't find .text section
In this case (on a Sparc Solaris 2.6) two libraries did not have .text sections, so no symbols are recorded for those two libraries.
The failure to find symbols is not fatal. The debugging tools that use the libraries are simply not able to give as complete information as if the symbols had been available. The same soft failure occurs when a symbol in a .so has been declared static (or, in the case of Windows, any C symbol not explicitly DllExport-ed). The exact failure mode depends on the arrangement of the functions in memory; a function at one address whose symbol was not properly recorded will appear to be part of the next-lower-addressed function whose name was recorded.
The status of the data collecting can be determined at any time using the function profiler-status. Called with no arguments, it returns the status of the current profile. The status can be one of the following:
:inactive -- data collection has not yet started
:suspended -- data collection has started, but sampling is currently not being done
:sampling -- data collection has started and sampling is occurring
:saved -- data collection has been stopped, but as yet no analysis of saved data has occurred
:analyzed -- all data has been analyzed
A finalized analysis (profiler-status returns :analyzed) can be saved using the macro save-current-profile. It takes a variable name (i.e. a symbol) as an argument and the profile data is made the value of that variable.
Saving an analysis with save-current-profile allows you to analyze data at your leisure, running other analyses in the meantime. The keyword argument :current-profile to show-call-graph and show-flat-profile can take a saved profile instead of the default (which is the currently running or last run analysis) and display results using that data.
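A minimal sketch (the variable name *saved-profile* is just an illustration):

;; Make the finished analysis the value of *saved-profile*:
(prof:save-current-profile *saved-profile*)

;; Later, even after other profiles have been run, display the saved data:
(prof:show-flat-profile :current-profile *saved-profile*)
(prof:show-call-graph :current-profile *saved-profile*)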
Fairly fine control is provided over the collection of data while the data is being collected. Sampling can be started and stopped with the functions start-sampling and stop-sampling. The macro with-sampling can be used to sample just while specified forms are evaluated. Note that data collection started with the function start-profiler may start sampling immediately or may delay sampling (depending on the value of the keyword :start-sampling-p).
The variables *maxsamples* and *granularity* also control the behavior of the runtime analyzer. The first puts a limit on the number of samples collected. Its initial value is 100,000. The second specifies the time interval between time samples. The units are clock ticks, an internal value usually about 1/100 of a second (during which time thousands of machine instructions are executed). The space analyzer, which is not statistical in the same way, ignores the value of *granularity*.
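For example (the particular values are illustrative only):

(setq prof:*maxsamples* 500000)  ; allow up to 500,000 samples before collection stops
(setq prof:*granularity* 10)     ; take a time sample only every 10 machine clock ticks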
When data is being collected, everything done in Lisp is sampled, including the runtime analyzer itself. The runtime analyzer overhead (for either space or time) is small. The runtime analyzer can be turned on and off (that is suspended and resumed) with the functions start-sampling and stop-sampling.
After data collection completes, the data is stored within Lisp. If you wish to analyze the data, you must do so before sampling for another analysis is started. (You can, however, begin an analysis of the collected data and save it for later detailed analysis). If after data is collected, collecting data for a new runtime analysis is started, the data collected by the previous runtime analysis will be lost.
As an example of how to use these functions, we show how to collect data on a single function foo, which never calls itself, embedded in a larger piece of code. Suppose we want to analyze foo alone; the steps are as follows:
1. Fwrap the function foo with the following form. (Fwrapping code is described in fwrappers-and-advice.htm.)
(def-fwrapper timer-wrap (&rest args)
  (prog2 (prof:start-sampling)
      (call-next-fwrapper)
    (prof:stop-sampling)))

(fwrap 'foo 'timer-fw1 'timer-wrap)

2. Start the runtime analyzer with start-profiler with nil as the value of the keyword :start-sampling-p. This prepares the runtime analyzer to collect data without actually beginning sampling.
3. Run the code containing calls to foo.
4. Stop data collection when done.
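Putting steps 2 through 4 together, a minimal sketch (run-code-that-calls-foo stands for whatever code you run in step 3, and start-profiler is assumed to accept a :type keyword as with-profiling does):

(prof:start-profiler :type :time :start-sampling-p nil)  ; step 2: collection armed, sampling off
(run-code-that-calls-foo)  ; step 3: the fwrapper turns sampling on only while foo runs
(prof:stop-profiler)       ; step 4: end the data collection run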
It should be clear why foo cannot call itself: the fwrapper code placed around foo will cause data collection to stop when the embedded call to foo returns even though the original call is still being evaluated. If a function does call itself (or calls another function which calls it back), use the following code:
(defparameter *prof-depth* 0)

(def-fwrapper timer-wrap-2 (&rest args)
  (prog2 (progn (incf *prof-depth*)
                (if (= 1 *prof-depth*) (prof:start-sampling)))
      (call-next-fwrapper)
    (decf *prof-depth*)
    (if (zerop *prof-depth*) (prof:stop-sampling))))

(fwrap 'foo 'timer-fw2 'timer-wrap-2)
Collecting data on more than one function is more complex, because of embedded calls. Unless you are sure that the functions you are interested in never call each other, you must use global variables as flags or semaphores as above to be sure that data collection is done exactly when you want.
This section describes the interaction between the runtime analyzer and the multiprocessing capabilities of Allegro Common Lisp. The multiprocessing facility is described in multiprocessing.htm. The runtime analyzer has the ability to collect data on all processes or just a selected set.
Data collection on processes can be controlled on a process-by-process basis with the function mp:profile-process-p, which returns the value of the profile-process-p flag of an argument process object. If that flag is nil, samples are not collected when the process is running. setf can be used to change the value of that flag on a process.
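For example (*background-worker* here is a hypothetical variable holding a process object):

(setf (mp:profile-process-p *background-worker*) nil)  ; exclude this process from sampling
(mp:profile-process-p *background-worker*)             ; => nil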
The operators start-profiler and with-profiling accept the keyword argument :except-processes (on platforms that use OS threads, Windows only in 7.0) or :except-stackgroups (non-OS-threads platforms), which allows another form of control over which processes have data collected on them. By giving a list of processes as the value of that argument, you can prohibit the runtime analyzer from collecting samples for the given processes. The Process Browser in Allegro Composer (on UNIX) provides a way to set this flag with the mouse.
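A minimal sketch on an OS-threads platform (*background-worker* is again a hypothetical process object and the workload form is illustrative):

(prof:with-profiling (:type :time
                      :except-processes (list *background-worker*))
  (do-some-work))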
The runtime analyzer has been enhanced to separate closure calls. start-profiler and with-profiling have a new keyword called :interpret-closures. When :interpret-closures is nil (the default), the closures are not saved and analyzed separately, so the behavior is the same as in earlier releases. When true, closures are saved and analyzed as separate entries, even though they may use the same template function.
A function spec called closure has been added (see implementation.htm for information on function specs). This is valid whenever analyzed data exists and identifies the closures which the runtime analyzer has seen. The function spec is (prof:closure n), where n is a small integer enumerating the closures in the data. This allows the user to grab the closure object and to do a disassemble-profile on it without having to know what symbol holds its fdefinition (which is sometimes very hard).
When flat analyses and call-graphs are reported, closures are identified by (prof:closure n), followed by the printed representation of the closure object itself.
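A minimal sketch of using these features (the index 0 and the workload form are illustrative; passing the (prof:closure n) function spec to fdefinition and disassemble-profile is assumed to work the way function specs do elsewhere in Allegro CL):

(prof:with-profiling (:type :time :interpret-closures t)
  (do-some-work))
;; After the data has been analyzed, grab the first closure the analyzer saw:
(fdefinition '(prof:closure 0))
;; ... or disassemble it with per-instruction time estimates:
(prof:disassemble-profile '(prof:closure 0))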
Space and time analysis results can be displayed graphically if you are running Allegro Composer (using an X window) but can only be printed to the terminal if you are not. Graphical display on UNIX is described in The window interface to the runtime analyzer of the Allegro Composer User Guide. Graphical display on Windows uses the Runtime Analyzer Results dialog. The system will display space or time data automatically according to what kind of data was collected.
See also Section 6.2.1 The disassemble-profile function for how to display profile data in a disassembly.
Note that you should not collect data for additional analyses once you have started to display results on the terminal. Only do another runtime analyzer run when your analysis is complete. Note too that you can save the results with save-current-profile.
You can look at the data in two ways: a flat analysis and a call-graph. A flat analysis shows the time or space information sorted by function from largest consumer of time or space to least consumer.
The call-graph format shows information about the callers and callees of each function grouped with the information about the function.
The two functions show-call-counts and list-call-counts report call-count results. The first prints the results and the second returns them in a list. (See the definitions of the functions for information on specifying an analysis other than the current one and controlling the number of functions about which information is printed.)
The function show-flat-profile prints a flat analysis to *standard-output* (or another stream if you desire). It shows how much time is spent or space is used by function calls when code is executed. There is an example of a flat analysis output below.
The function show-call-graph allows you to examine which functions call and are called by other functions, and how time spent or space usage is distributed among the callees and callers. Here is how it works.
For each function, F, a call-graph shows the functions that called it (parents of F) and the functions that it called (children of F). Roughly speaking, a call-graph is the flat analysis display expanded to show parents and children. The display for function F is surrounded by dashed lines. F's display shows first information about the parents of F, then information about F itself, then information about F's children. (See the example later in this section.)
The following algorithm is used to find the parents of a function F. Each sample is examined and if F appears in the call stack, the function calling F is recorded as the parent. If F appears more than once in the call stack, all the different parents (callers) are recorded, but if the same caller calls F several times in the stack, only the oldest instance is recorded. The time spent in (or the space used by) F and its descendants due to calls by a particular parent is proportional to the ratio of the number of samples where this particular parent of F was found to the total number of samples.
The algorithm for finding children is as follows. Each sample is examined and if F is in the call stack, the function it calls is recorded as its child. If F appears more than once in the stack, each different child is recorded but if the same child appears more than once, only the oldest instance is recorded. The time spent in (or the space used by) a particular child and its descendants due to calls by F is proportional to the ratio of the number of samples where this particular child of the function F was found to the number of samples.
If there is a recursive entry in a call-graph, then the local % entry for that function will have an r after it. If there is a recursive entry, then time attributed to this entry may be counted twice because of recursion. For example, if bar calls baz calls foo calls baz, then foo and bar are parents of baz, and there will be two entries for parents of baz, corresponding to only 1 clock tick (sample). So the report shows two lines (one for foo and one for bar), but the time is double counted. Similarly for a child case: bar calls foo calls bar calls baz. Then foo and baz are both children of bar (reported on 2 separate lines in the report) corresponding to only 1 sample.
The output of a flat analysis and particularly of a call-graph may be voluminous. In order to reduce the amount of data to a manageable amount, the default display of either a flat analysis or a call-graph suppresses some internal functions and packages. When a package is hidden, functions named by symbols in the package are not displayed. The functions and packages suppressed are in the lists *hidden-functions* and *hidden-packages*, respectively. You can suppress display of extra functions by adding them to the list that is the value of *hidden-functions*. You can suppress the display of extra packages (that is, of functions named by symbols in the packages) by adding them to the list that is the value of *hidden-packages*.
When functions are suppressed from the display, the runtime analyzer attributes any effects these functions might have had to the function that called them. So, assuming the calling function is itself not suppressed from the display, the calling function in the runtime analyzer display can be thought of as a kind of composite, made up of itself and the suppressed function(s) it called. Functions which the runtime analyzer records as calling hidden functions are indicated in the runtime analyzer display by prepending "..." to the function name.
In a call graph all directly called undisplayed functions are attributed to the caller. Such undisplayed functions may eventually directly or indirectly call displayed functions, which will be shown in the call graph as children of the original caller. I.e., if foo calls bar calls baz, and bar is suppressed, then foo will be displayed as "... foo" and baz will be shown as a child of foo. foo gets dots because it can be thought of as a composite of foo and bar (as we explained above).
Sometimes a called function and all its descendants may be suppressed (e.g., none of them are displayable), in which case the runtime analyzer display just uses "..." to indicate that there are completely suppressed cases.
You can further control the amount of analysis you see with the variables *significance-threshold* and *fractional-significance-threshold*.
The variable *hidden-space-functions* lists functions (mostly kernel functions named with strings rather than symbols) whose suppression makes the space analyzer output more understandable. The functions mostly actually allocate space requested (directly or implicitly) by other functions presumably of more interest.
Note several things about the display control variables we have just discussed. Setting *hidden-packages*, *hidden-functions*, and *hidden-space-functions* all to nil and setting both *significance-threshold* and *fractional-significance-threshold* to 0.0 results in all available detail from the run being displayed.
You can always reset the values of the five display control variables, *hidden-packages*, *hidden-functions*, *hidden-space-functions*, *significance-threshold*, and *fractional-significance-threshold*, with the function reset-profiler-display-defaults.
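For example, to see every available detail of the current analysis and then return to the normal display (a sketch based on the notes above):

(setq prof:*hidden-packages* nil
      prof:*hidden-functions* nil
      prof:*hidden-space-functions* nil
      prof:*significance-threshold* 0.0
      prof:*fractional-significance-threshold* 0.0)
(prof:show-call-graph)
;; Restore the default values of all five display control variables:
(prof:reset-profiler-display-defaults)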
The function disassemble-profile will print a disassembly of a specific function indicating the estimated time spent for each instruction. Calls to other functions are identified as jumps to subroutines (using hardware-specific terminology). We remind you that the time data is statistical and so the times attributed to particular instructions are estimates. The less time spent in a function overall, the worse the estimate.
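For example, using the fact function from the examples below (assuming disassemble-profile accepts a function name the way cl:disassemble does):

(prof:disassemble-profile 'fact)  ; disassembly of fact annotated with estimated time per instruction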
The runtime analyzer emits headers similar to the following (the numbers are different of course) in the show-flat-profile or show-call-graph output:
Sampling stopped after 1181 samples taken for the time profiler.
  Sample represents 1.2 seconds of user+system CPU time (out of a total of 5.5).
Sampling is done on the basis of accumulated processor time in threads known to the Allegro CL implementation. The value "1181" is the number of times a Lisp thread accumulated sufficient processor time to qualify as a sample. The value "1.2" is a total sampling time calculated by multiplying the number of samples by the sampling granularity. The value "5.5" is the total user and system time used by the Lisp process during the sampling period. When this value is the same as the total sampling time, it is not included in the message.
The total user+system time can be larger than the total sampling time when multiple processors are available. In Virtual Threads Lisps (i.e. non-OS-threads, non-SMP) only one Lisp thread may be running at any moment, but there may be system threads and foreign threads running and accumulating time in parallel with the Lisp thread. In SMP Lisps the total user+system time will often be larger than the total sampling time since several Lisp threads can run in parallel.
Note that when a Lisp thread is waiting, it accumulates no time at all and thus does not appear in the profiler output.
When the profiler's low-level mechanism sees a function on the stack, it records the fact that it's there, but it doesn't use the address of the function to identify it; instead it uses a hash code that was generated when the function object was created. Every function object gets a hash code from a single counter when the object is created. Because of this, uniqueness is mostly guaranteed, except when the counter wraps around (and this is rare because of the sheer size of the function's hash code field).
When it comes time to identify the function whose hash code was recorded, however, that function may have disappeared, i.e. the garbage collector may have noticed that there are no references to that function object and thus collected the object. In response the profiler creates an ephemeral function, which serves as a placeholder for the real object that disappeared. The name of the ephemeral object is (:internal <ephemeral-name> <hash-code>), where <ephemeral-name> is literally prof::<ephemeral> and <hash-code> is the hash code that was recorded with the function object.
It may be hard to figure out which function went away, but the fact that hash codes are assigned incrementally might help the user figure out which function was removed, based on other functions allocated at around the same time.
This section gives examples of using the runtime analyzer. These examples were run using Allegro CL 64-bit for Linux. The results might be quite different if run on a different platform. Further, optimization settings during compilation dramatically affect the results. You should not expect to see the results included here if you try the examples on your system.
The first example uses the time runtime analyzer. We will profile the following functions, which we define and compile.
(defun fact (x) (if (= x 1) 1 (* x (fact (- x 1)))))

(defun bar (x) (dotimes (i 2000) (make-array x)))

(defun foo (x) (fact x) (bar x))

(compile 'fact)
(compile 'foo)
(compile 'bar)
First we collect the data by evaluating the following form calling with-profiling:
(prof:with-profiling (:type :time) (foo 20000))
We display the flat analysis by evaluating the following form calling show-flat-profile:
(prof:show-flat-profile)
That causes the following to be printed (the results you see may be different, of course):
Sampling stopped after 94 samples taken for the time profiler.
  Sample represents .9 seconds of user+system CPU time (out of a total of 1.0).
Times below 1.0% will be suppressed.

   %      %     self   total            self     total   Function
  Time   Cum.   secs   secs    calls   ms/call  ms/call  name
  47.9   47.9    0.4    0.4                              "sigprocmask"
  37.2   85.1    0.3    0.8                              "bignum_multiply"
  12.8   97.9    0.1    0.1                              "new_simple_vector"
   1.1   98.9    0.0    0.0                              "block_asynch_signals"
   1.1  100.0    0.0    0.0                              "mode_line_gc_end"
The first question you might have is: where are fact, foo, and bar? (fact, foo, and bar are the functions we defined at the beginning of the example.) A flat analysis displays entries ordered by the (estimated) time a function is at the top of the stack, not on the stack. The three functions we defined call the low-level functions listed to do their work and so themselves spend little time at the top of the stack, so little that it falls below the default value of *significance-threshold* (0.01, equivalent to 1%) for fact and foo. If you printed this display with the value of *significance-threshold* 0.0, you would see fact and foo (and quite a few more functions) listed.
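For instance, a sketch of verifying this:

(setq prof:*significance-threshold* 0.0)
(prof:show-flat-profile)   ; fact, foo, and many more functions now appear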
We can see from the output that most of the time is spent in the Lisp runtime-system functions bignum_multiply and new_simple_vector. (We know they are runtime-system functions because they are named by strings.)
Now let us look at the call graph. We display it by evaluating the form calling show-call-graph:
(prof:show-call-graph)
Here is the call graph that is printed (the results you see may be different, of course):
Time profile of sampled pc values by function, children, and parents.
Total times below 1.0% will be suppressed.
Parent and child times less 2.0% of the node will be suppressed.
Sampling stopped after 94 samples taken for the time profiler.
  Sample represents .9 seconds of user+system CPU time (out of a total of 1.0).

   %      %                      Parent
  self   total   total   local   Function
  Time   Time    secs     %      Child

   0.0   100.0    0.9     0.0  ==> ... "start"
                  0.9   100.0      ... "start_reborn_lisp"
-----------------------------------------------------
                  0.9   100.0      ... "start"
   0.0   100.0    0.9     0.0  ==> "start_reborn_lisp"
                  0.9   100.0      "first_lisp_thread"
-----------------------------------------------------
                  0.9   100.0      "start_reborn_lisp"
   0.0   100.0    0.9     0.0  ==> "first_lisp_thread"
                  0.9   100.0      excl::lisp-thread-start
-----------------------------------------------------
                  0.9   100.0      "first_lisp_thread"
   0.0   100.0    0.9     0.0  ==> excl::lisp-thread-start
                  0.9   100.0      excl::run-initial-thread
-----------------------------------------------------
                  0.9   100.0      excl::lisp-thread-start
   0.0   100.0    0.9     0.0  ==> ... excl::run-initial-thread
                  0.9   100.0      ... excl::thread-reset-catcher
-----------------------------------------------------
                  0.9   100.0      ... excl::run-initial-thread
   0.0   100.0    0.9     0.0  ==> excl::thread-reset-catcher
                  0.9   100.0      excl::thread-bind-and-call
-----------------------------------------------------
                  0.9   100.0      excl::thread-reset-catcher
   0.0   100.0    0.9     0.0  ==> ... excl::thread-bind-and-call
                  0.9   100.0      ... eval
-----------------------------------------------------
                  0.9   100.0      ... excl::thread-bind-and-call
   0.0   100.0    0.9     0.0  ==> eval
                  0.9   100.0      (:anonymous-lambda 13)
-----------------------------------------------------
                  0.9   100.0      eval
   0.0   100.0    0.9     0.0  ==> (:anonymous-lambda 13)
                  0.9   100.0      prof::prof-a-funcall
-----------------------------------------------------
                  0.9   100.0      (:anonymous-lambda 13)
   0.0   100.0    0.9     0.0  ==> prof::prof-a-funcall
                  0.9   100.0      (:internal (:anonymous-lambda 13) 0)
-----------------------------------------------------
                  0.9   100.0      prof::prof-a-funcall
   0.0   100.0    0.9     0.0  ==> (:internal (:anonymous-lambda 13) 0)
                  0.9   100.0      foo
-----------------------------------------------------
                  0.9   100.0      (:internal (:anonymous-lambda 13) 0)
   0.0   100.0    0.9     0.0  ==> foo
                  0.8    84.0      fact
                  0.2    16.0      bar
-----------------------------------------------------
                  0.8   100.0r     fact
                  0.8   100.0r     foo
   0.0    84.0    0.8     0.0  ==> fact
                  0.8   100.0r     "integer_multiply"
                  0.8   100.0r     fact
-----------------------------------------------------
                  0.8   100.0      fact
   0.0    84.0    0.8     0.0  ==> "integer_multiply"
                  0.8   100.0      "bignum_multiply"
-----------------------------------------------------
                  0.8   100.0      "integer_multiply"
  37.2    84.0    0.8    44.3  ==> "bignum_multiply"
                  0.4    54.4      "sigprocmask"
-----------------------------------------------------
                  0.4    95.6      "bignum_multiply"
                  0.0     4.4      bar
  47.9    47.9    0.4   100.0  ==> "sigprocmask"
-----------------------------------------------------
                  0.2   100.0      foo
   0.0    16.0    0.2     0.0  ==> bar
                  0.1    80.0      "new_simple_vector"
                  0.0    13.3      "sigprocmask"
                  0.0     6.7      "mode_line_gc_end"
-----------------------------------------------------
                  0.1   100.0      bar
  12.8    12.8    0.1   100.0  ==> "new_simple_vector"
-----------------------------------------------------
                  0.0   100.0      bar
   1.1     1.1    0.0   100.0  ==> "mode_line_gc_end"
-----------------------------------------------------
                  0.0   100.0      "bignum_multiply"
   1.1     1.1    0.0   100.0  ==> "block_asynch_signals"
-----------------------------------------------------
Look at the entry for foo (12th down). It tells us that foo spends much more time in fact (84%) than it does in bar (16%). Looking at the entry for fact, the "r" next to the local percent numbers indicate that this function has been called recursively. (The suspension points `...' that appear in some entries mean that hidden functions are being called.)
To show call-counting, we call for a time analysis again but specify :count 100 and :count-list (list #'fact #'bar #'make-array), so the call to with-profiling is
(prof:with-profiling (:type :time :count 100
                      :count-list (list #'fact #'bar #'make-array))
  (foo 20000))
Here is the flat analysis produced by show-flat-profile and the printout of show-call-counts (somewhat abbreviated).
cl-user(23): (prof:show-flat-profile)
Sampling stopped after 87 samples taken for the time profiler.
  Sample represents .9 seconds of user+system CPU time.
Times below 1.0% will be suppressed.

   %      %     self   total            self     total   Function
  Time   Cum.   secs   secs    calls   ms/call  ms/call  name
  43.7   43.7    0.3    0.3                              "sigprocmask"
  40.0   83.7    0.3    0.6                              "bignum_multiply"
  15.0   98.7    0.1    0.1                              "new_simple_vector"
   1.2  100.0    0.0    0.0                              "getrusage"
cl-user(18): (prof:show-call-counts)
 21,999 #<Function *_2op>
  2,000 #<Function calc-array-size>
    216 #<Function fspec-candidate-p>
    216 #<Function function-cons-name-p>
    216 #<Function function-name-p>
    216 #<Function instance-read-1>
    108 #<Function arglist>
    108 #<Function autoloadp>
    108 #<Function class-of>
    108 #<standard-generic-function aclmop:class-precedence-list>
    108 #<Function clean-up-scheduled-finalizations>
    108 #<Function closure-has-arglist-p>
    108 #<Closure Template Function default-gc-after-hook @ #x1000002de22>
    108 #<Function encapsulated-basic-definition>
    108 #<Function gc-after>
    108 #<Function generic-function-p>
    108 #<Function internal-class-of>
    108 #<Function length>
    108 #<Function pop-finalization>
    108 #<Function process-finalizations>
    108 #<Function release-scheduled-finalization-list>
    108 #<Function resident-function>
    108 #<Function try-reserve-scheduled-finalization-list>
    108 #<Function typep-class>
      2 #<standard-generic-function profiler::actual-profile-type>
      2 #<Function structure-ref>
      1 #<standard-generic-function profiler::actual-profile-status>
      1 #<Function bar>
      1 #<Function fact>
      1 #<Function foo>
      1 #<Function increment-call-counts>
      1 #<Function instance-write-1>
[...]
cl-user(19):
Note that the call counts are by function object, not function name, that the call-counting overhead in this case is not very significant -- see the comparable result without call-counting above -- (but in other cases it might be), and that runtime-system functions (named by strings in the flat analysis) are not counted.
We next show an example of a space analysis. We use the following code, which (1) uses a lot of space and (2) uses different types of space. Consider the following functions:
(defun foo (n)
  (let ((array (bar n)))
    (dotimes (i (* n 8000))
      (setf (aref array i) (cons nil nil)))))

(defun bar (n) (make-array (* n 8000)))

(compile 'foo)
(compile 'bar)
We assume these functions have been defined and compiled within Lisp. Note that bar creates an array (which will trigger calls to new_other or new_simple_vector) and foo does a lot of consing, which will trigger calls to _page_from_new or lisp_cons.
We call these functions in a with-profiling form:
(prof:with-profiling (:type :space) (foo 1))
Before looking at the results, we set *hidden-functions*
to nil
, so we can see what system functions
are actually allocating the space.
(setf prof:*hidden-functions* nil)
We print the results with show-flat-profile.
(prof:show-flat-profile)
At first, we see foo and bar but no internal functions (the suspension points `...' mean that functions called by foo and bar are using the space). Then we set *hidden-space-functions* to nil and we see the runtime-system space functions actually called.
Sampling stopped after 23 samples taken for the space profiler.
  Sample represents 184.0 Kbytes of space allocated.
Mem.s below 1.0% will be suppressed.

   %      %     self   total            self     total   Function
  Mem.   Cum.   Kbyte  Kbyte   calls   by/call  by/call  name
  65.2   65.2    120    184                              ... foo
  34.8  100.0     64     64                              ... bar
cl-user(32): (setq prof:*hidden-space-functions* nil)
nil
cl-user(33): (prof:show-flat-profile)
Sampling stopped after 23 samples taken for the space profiler.
  Sample represents 184.0 Kbytes of space allocated.
Mem.s below 1.0% will be suppressed.

   %      %     self   total            self     total   Function
  Mem.   Cum.   Kbyte  Kbyte   calls   by/call  by/call  name
  65.2   65.2    120    120                              "lisp_cons"
  34.8  100.0     64     64                              "new_simple_vector"
cl-user(34):
The first thing to notice is that "lisp_cons" and "new_simple_vector" use all the space. This is not surprising since they actually allocate the space. No other functions appear since all the space is allocated by these functions.
Now we look at a call graph by calling show-call-graph.
cl-user(34): (prof:show-call-graph)
Space profile of sampled pc values by function, children, and parents.
Total mem.s below 1.0% will be suppressed.
Parent and child mem.s less 2.0% of the node will be suppressed.
Sampling stopped after 23 samples taken for the space profiler.
  Sample represents 184.0 Kbytes of space allocated.

   %      %                      Parent
  self   total   total   local   Function
  Mem.   Mem.    Kbyte    %      Child

   0.0   100.0    184     0.0  ==> "start"
                  184   100.0      "cont_setstack"
-----------------------------------------------------
                  184   100.0      "start"
   0.0   100.0    184     0.0  ==> "cont_setstack"
                  184   100.0      "startup_lisp"
-----------------------------------------------------
                  184   100.0      "cont_setstack"
   0.0   100.0    184     0.0  ==> "startup_lisp"
                  184   100.0      "start_reborn_lisp"
-----------------------------------------------------
                  184   100.0      "startup_lisp"
   0.0   100.0    184     0.0  ==> "start_reborn_lisp"
                  184   100.0      "first_lisp_thread"
-----------------------------------------------------
                  184   100.0      "start_reborn_lisp"
   0.0   100.0    184     0.0  ==> "first_lisp_thread"
                  184   100.0      excl::lisp-thread-start
-----------------------------------------------------
                  184   100.0      "first_lisp_thread"
   0.0   100.0    184     0.0  ==> excl::lisp-thread-start
                  184   100.0      excl::run-initial-thread
-----------------------------------------------------
                  184   100.0      excl::lisp-thread-start
   0.0   100.0    184     0.0  ==> excl::run-initial-thread
                  184   100.0      excl::setup-required-thread-bindings
-----------------------------------------------------
                  184   100.0      excl::run-initial-thread
   0.0   100.0    184     0.0  ==> excl::setup-required-thread-bindings
                  184   100.0      excl::thread-reset-catcher
-----------------------------------------------------
                  184   100.0      excl::setup-required-thread-bindings
   0.0   100.0    184     0.0  ==> excl::thread-reset-catcher
                  184   100.0      excl::thread-bind-and-call
-----------------------------------------------------
                  184   100.0      excl::thread-reset-catcher
   0.0   100.0    184     0.0  ==> excl::thread-bind-and-call
                  184   100.0      #<Runtime Function Object (:runsys "lisp_apply")>
-----------------------------------------------------
                  184   100.0r     tpl:start-interactive-top-level
                  184   100.0r     excl::thread-bind-and-call
   0.0   100.0    184     0.0  ==> #<Runtime Function Object (:runsys "lisp_apply")>
                  184   100.0r     tpl:top-level-read-eval-print-loop
                  184   100.0r     excl::start-lisp-execution
-----------------------------------------------------
                  184   100.0      #<Runtime Function Object (:runsys "lisp_apply")>
   0.0   100.0    184     0.0  ==> excl::start-lisp-execution
                  184   100.0      tpl:start-interactive-top-level
-----------------------------------------------------
                  184   100.0      excl::start-lisp-execution
   0.0   100.0    184     0.0  ==> tpl:start-interactive-top-level
                  184   100.0      #<Runtime Function Object (:runsys "lisp_apply")>
-----------------------------------------------------
                  184   100.0      #<Runtime Function Object (:runsys "lisp_apply")>
   0.0   100.0    184     0.0  ==> tpl:top-level-read-eval-print-loop
                  184   100.0      tpl::top-level-read-eval-print-loop1
-----------------------------------------------------
                  184   100.0      tpl:top-level-read-eval-print-loop
   0.0   100.0    184     0.0  ==> tpl::top-level-read-eval-print-loop1
                  184   100.0      excl::read-eval-print-loop
-----------------------------------------------------
                  184   100.0      tpl::top-level-read-eval-print-loop1
   0.0   100.0    184     0.0  ==> excl::read-eval-print-loop
                  184   100.0      tpl::read-eval-print-one-command
-----------------------------------------------------
                  184   100.0      excl::read-eval-print-loop
   0.0   100.0    184     0.0  ==> tpl::read-eval-print-one-command
                  184   100.0      eval
-----------------------------------------------------
                  184   100.0      tpl::read-eval-print-one-command
   0.0   100.0    184     0.0  ==> eval
                  184   100.0      (:anonymous-lambda 15)
-----------------------------------------------------
                  184   100.0      eval
   0.0   100.0    184     0.0  ==> (:anonymous-lambda 15)
                  184   100.0      prof::prof-a-funcall
-----------------------------------------------------
                  184   100.0      (:anonymous-lambda 15)
   0.0   100.0    184     0.0  ==> prof::prof-a-funcall
                  184   100.0      (:internal (:anonymous-lambda 15) 0)
-----------------------------------------------------
                  184   100.0      prof::prof-a-funcall
   0.0   100.0    184     0.0  ==> (:internal (:anonymous-lambda 15) 0)
                  184   100.0      foo
-----------------------------------------------------
                  184   100.0      (:internal (:anonymous-lambda 15) 0)
   0.0   100.0    184     0.0  ==> foo
                  120    65.2      "lisp_cons"
                   64    34.8      bar
-----------------------------------------------------
                  120   100.0      foo
  65.2    65.2    120   100.0  ==> "lisp_cons"
-----------------------------------------------------
                   64   100.0      foo
   0.0    34.8     64     0.0  ==> bar
                   64   100.0      "new_simple_vector"
-----------------------------------------------------
                   64   100.0      bar
  34.8    34.8     64   100.0  ==> "new_simple_vector"
-----------------------------------------------------
cl-user(35):
Copyright (c) 1998-2019, Franz Inc. Oakland, CA., USA. All rights reserved.
This page was not revised from the 8.2 page.
Created 2012.5.30.