Discussion:
[gambit-list] Re --enable-auto-forcing: scope, how/where is it implemented, and how to build with it properly?
Adam
2017-06-11 09:08:27 UTC
Permalink
Hi Marc!

Three questions regarding promise auto-forcing:


1) Scope
If built correctly (e.g. per the instructions below), every single
Scheme-world evaluation in the whole GVM will be included in the
auto-forcing scheme, right?

E.g. (begin (declare (not safe)) (print (##string-append (delay "Hello
world")) "\n")) and any other quirky Scheme code cases are also included.

Do any declares affect auto-forcing behavior?


2) How/where is it implemented?
I'm trying to follow the code path of the "--enable-auto-forcing"
./configure argument, but I cannot figure out how it is propagated so as to
affect anything in include/ , gsc/ , lib/ , or anywhere else.

Also, I can't find any logic in gsc/ , lib/ or include/ (that's where it
should be, I suppose) that pertains to automatic forcing of promises.

Possibly the "macro-force-vars", which is used all over the runtime and
compiler, would have something to do with this, but I don't find its
definition anywhere.

What am I missing? Would you take a minute to describe where and how the
auto-forcing logic is implemented?


3) How to build Gambit with it enabled properly?
Also, enabling auto-forcing requires a full recompile of Gambit's own
sources, right? (So that *all* C files involved are recompiled, the runtime
included; e.g. the REPL will then auto-force too, and not just user code,
since forcing only a tiny part of the logic would lead to a totally uneven
application of auto-forcing.) So the following is how to properly switch it
on, right?:


git clone https://github.com/gambit/gambit.git
cd gambit
./configure --enable-auto-forcing
make -j4
mv gsc/gsc gsc-boot
make bootclean
make -j4
sudo make install

Or do you suggest any other sequence or way? Should I use "from-scratch"
instead of "make bootclean" + "make"?


Thanks!
Bradley Lucier
2017-06-12 22:30:10 UTC
Permalink
Possibly the "macro-force-vars", which is used all over the runtime and compiler, would have something to do with this, but I don't find its definition anywhere.
In _gambit#.scm:

(macro-define-syntax macro-force-vars
  (lambda (stx)
    (syntax-case stx ()
      ((_ vars expr)
       (if (let* ((co
                   (##global-var-ref
                    (##make-global-var '##compilation-options)))
                  (comp-opts
                   (if (##unbound? co) '() co)))
             (assq 'force comp-opts))

           (syntax-case (datum->syntax
                         #'vars
                         (map (lambda (x) `(,x (##force ,x)))
                              (syntax->list #'vars)))
               ()
             (bindings #'(let bindings expr)))

           #'expr)))))
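
To make the macro's effect concrete, here is a sketch (my reading of the definition above, not verified macro output) of what a use of macro-force-vars expands to:

```scheme
;; Sketch of the expansion implied by the definition above,
;; assuming 'force is present in ##compilation-options:
;;
;; (macro-force-vars (a b) (cons a b))
;;
;; ==> (let ((a (##force a))
;;           (b (##force b)))
;;       (cons a b))
;;
;; and, without the 'force option, simply:
;;
;; (macro-force-vars (a b) (cons a b)) ==> (cons a b)
```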
Adam
2017-06-16 03:04:43 UTC
Permalink
Post by Adam
Post by Adam
Possibly the "macro-force-vars", which is used all over the runtime and
compiler, would have something to do with this, but I don't find its
definition anywhere.
(macro-define-syntax macro-force-vars
  (lambda (stx)
    (syntax-case stx ()
      ((_ vars expr)
       (if (let* ((co
                   (##global-var-ref
                    (##make-global-var '##compilation-options)))
                  (comp-opts
                   (if (##unbound? co) '() co)))
             (assq 'force comp-opts))
           (syntax-case (datum->syntax
                         #'vars
                         (map (lambda (x) `(,x (##force ,x)))
                              (syntax->list #'vars)))
               ()
             (bindings #'(let bindings expr)))
           #'expr)))))
Ah right,
https://github.com/gambit/gambit/blob/9c3dcbdc322a10673370c0880696ba131144251d/lib/_gambit%23.scm#L316
, and used to be a define-macro,
https://github.com/gambit/gambit/blob/29103e6a29b8fbbf7d6fc772a344b814be3f1c1a/lib/_gambit%23.scm#L492
, and all the rest of the code is meticulously padded with its use.


This also sheds a bit of light on why the slot containing the promise is
not replaced with the forced value. Maybe that would be possible in some
situations though. When |x| is a symbol, it could be |set!| with the
forced value?

That would cover standard variable slots and not typedef, vector, pair etc.
slots though, I guess I'd need to dig in a bit more to understand how this
one actually works out. If you have any spontaneous ideas, feel free to
share.


Any idea where in the sources fundamental primitives like |+| , |if| , |or|
auto-force?
Marc Feeley
2017-06-16 05:52:13 UTC
Permalink
Here is the definition of + from lib/_num.scm:

(define-prim-nary (+ x y)
  0
  (if (##number? x) x '(1))
  (##+ x y)
  macro-force-vars
  macro-no-check
  (##pair? ##fail-check-number))

The define-prim-nary macro will expand this to an n-ary procedure definition where the 0 argument case returns 0, the 1 argument case returns the argument if it is a number otherwise it raises a type error (by calling ##fail-check-number), and the general >= 2 argument case calls ##+ to fold the argument list. All arguments are passed to macro-force-vars to force the argument if it is a promise (and --enable-auto-forcing is used).
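
As a rough sketch (hypothetical names and structure on my part, not the actual macro output), the expansion Marc describes might look like:

```scheme
;; Hypothetical sketch of what (define-prim-nary (+ x y) ...) produces;
;; the real expansion in lib/_num.scm differs in detail, and the
;; ##fail-check-number call signature here is an approximation.
(define (+ . args)
  (cond ((##null? args)
         0)                                 ; 0-argument case
        ((##null? (##cdr args))
         (let ((x (##car args)))
           (macro-force-vars (x)            ; force x if auto-forcing enabled
             (if (##number? x)
                 x
                 (##fail-check-number 1 + args)))))  ; type error on arg 1
        (else
         ;; general >= 2 argument case: fold the argument list with ##+,
         ;; each pairwise step forcing its operands via macro-force-vars
         (let loop ((acc (##car args)) (rest (##cdr args)))
           (if (##null? rest)
               acc
               (loop (##+ acc (##car rest)) (##cdr rest)))))))
```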

Using set! to “short-circuit” promises is not a good idea because it introduces a cell for the variable (if local and not previously assigned) and this slows things down. In early versions of Gambit (on Motorola 68K), the garbage collector did this short-circuiting (i.e. a reference to a promise was replaced with the value of the promise if it was previously forced). This isn’t done currently but probably easy to add.

Marc
Post by Bradley Lucier
Possibly the "macro-force-vars", which is used all over the runtime and compiler, would have something to do with this, but I don't find its definition anywhere.
(macro-define-syntax macro-force-vars
  (lambda (stx)
    (syntax-case stx ()
      ((_ vars expr)
       (if (let* ((co
                   (##global-var-ref
                    (##make-global-var '##compilation-options)))
                  (comp-opts
                   (if (##unbound? co) '() co)))
             (assq 'force comp-opts))
           (syntax-case (datum->syntax
                         #'vars
                         (map (lambda (x) `(,x (##force ,x)))
                              (syntax->list #'vars)))
               ()
             (bindings #'(let bindings expr)))
           #'expr)))))
Ah right, https://github.com/gambit/gambit/blob/9c3dcbdc322a10673370c0880696ba131144251d/lib/_gambit%23.scm#L316 , and used to be a define-macro, https://github.com/gambit/gambit/blob/29103e6a29b8fbbf7d6fc772a344b814be3f1c1a/lib/_gambit%23.scm#L492 , and all the rest of the code is meticulously padded with its use.
This also sheds a bit of light on why the slot containing the promise is not replaced with the forced value. Maybe that would be possible in some situations though.. When |x| is a symbol, it could be |set!| with the forced value?
That would cover standard variable slots and not typedef, vector, pair etc. slots though, I guess I'd need to dig in a bit more to understand how this one actually works out. If you have any spontaneous ideas, feel free to share.
Any idea where in the sources fundamental primitives like |+| , |if| , |or| autoforce?
Marc Feeley
2017-08-15 13:53:13 UTC
Permalink
I'll make closer benchmarking of auto forcing later, but, it incurs a quite steep overhead.
Experimental data please!
With --enable-auto-forcing, at procedure calls, that is (procedure 'arg1 'arg2 'etc.), |procedure| is forced, as we can see by the example that in both interpreted and compiled mode, ((delay (begin (println "Forced.") (lambda (v) v))) #t) evaluates (and it prints out "Forced.").
I'd wildly guess that quite a lot of the overhead in auto-forcing is here. If you're sure the operator will never be a promise in any procedure call, then this particular aspect of auto forcing could be disabled.
(Without checking, I would guess that while Gambit may optimize so that local procedure calls may avoid being brought through the |force| logics, then, this optimization would do so that calls to procedures defined in *other* modules such as the runtime, would *not* be taken through |force| also. That should be positive for speed.)
Would it be possible for me to disable this particular aspect of the auto-forcing, to get higher performance?
Currently auto-forcing only works in interpreted code. So if your program does (f x) and (car y) and you compile that, then “f” will not be forced and “y” will not be forced if car is inlined, for example if you (declare (not safe)). You can consider this a bug… to be fixed.

However, compiled predefined functions do the auto-forcing (when the system is configured with --enable-auto-forcing). So (car y) will force y if the actual function is called, for example if you (declare (safe)), which is the default.

Note that for “safe” compiled code, when compiling (f x) where f is a mutable global variable, it is necessary to generate a check that f is a procedure. If it is not a procedure a handler is called that normally raises an exception. This handler could easily be extended to first force the value and check if the resulting value is a procedure and proceed with the call if it is (hooray for raising exceptions in tail-position!). So there would be no additional cost for safe compiled code.
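
The handler extension Marc sketches could look roughly like this (hypothetical code on my part, consistent with his description but not taken from the runtime):

```scheme
;; Hypothetical: a nonprocedure-operator handler extended to force the
;; operator before deciding whether to raise an exception.
(define (handle-nonprocedure-operator oper args)
  (let ((forced (##force oper)))
    (if (##procedure? forced)
        (##apply forced args)   ; proceed with the call, in tail position
        (##raise-nonprocedure-operator-exception forced args #f #f))))
```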

Marc
Adam
2017-09-08 11:30:42 UTC
Permalink
Dear Marc,

Thank you very much for your clarification. Followup question below, to
really understand what you are saying.
I'll make closer benchmarking of auto forcing later, but, it incurs a
quite steep overhead.
Experimental data please!
I will provide it.
With --enable-auto-forcing, at procedure calls, that is (procedure 'arg1
'arg2 'etc.), |procedure| is forced, as we can see by the example that in
both interpreted and compiled mode, ((delay (begin (println "Forced.")
(lambda (v) v))) #t) evaluates (and it prints out "Forced.").
I'd wildly guess that quite a lot of the overhead in auto-forcing is
here. If you're sure the operator will never be a promise in any procedure
call, then this particular aspect of auto forcing could be disabled.
(Without checking, I would guess that while Gambit may optimize so that
local procedure calls may avoid being brought through the |force| logics,
then, this optimization would do so that calls to procedures defined in
*other* modules such as the runtime, would *not* be taken through |force|
also. That should be positive for speed.)
Would it be possible for me to disable this particular aspect of the
auto-forcing, to get higher performance?
Currently auto-forcing only works in interpreted code. So if your program
does (f x) and (car y) and you compile that, then “f” will not be forced
and “y” will not be forced if car is inlined, for example if you (declare
(not safe)). You can consider this a bug… to be fixed.
However, compiled predefined functions do the auto-forcing (when the
system is configured with --enable-auto-forcing). So (car y) will force y
if the actual function is called, for example if you (declare (safe)),
which is the default.
Note that for “safe” compiled code, when compiling (f x) where f is a
mutable global variable, it is necessary to generate a check that f is a
procedure. If it is not a procedure a handler is called that normally
raises an exception. This handler could easily be extended to first force
the value and check if the resulting value is a procedure and proceed with
the call if it is (hooray for raising exceptions in tail-position!). So
there would be no additional cost for safe compiled code.
Can you please clarify what you mean here, by giving one or two pieces of
example code that illustrate the difference between the various modes of
code operation? So that would be:

1) Interpreted code, vs.
2) Compiled code with (declare (safe)), vs.
3) Compiled code with (declare (not safe))

(I guess maybe (declare (block)) vs. (declare (separate)) could affect
forcing behavior, as inlined code would not be forced, but non-inlined code
could be.)



In either case if I understand you right, there are examples where Gambit
with auto-forcing enabled, will fail executing ((delay (lambda ()
'hello-world))) .

Also.. if I understand you right, there are cases when Gambit with
auto-forcing enabled also would fail evaluating (abs (delay 0)) .

Just for my clarity, please tell me under what range of conditions these
will fail.


Thanks!
Marc Feeley
2017-09-18 15:55:38 UTC
Permalink
Post by Adam
Dear Marc,
Thank you very much for your clarification. Followup question below, to really understand what you are saying.
I'll make closer benchmarking of auto forcing later, but, it incurs a quite steep overhead.
Experimental data please!
I will provide it.
With --enable-auto-forcing, at procedure calls, that is (procedure 'arg1 'arg2 'etc.), |procedure| is forced, as we can see by the example that in both interpreted and compiled mode, ((delay (begin (println "Forced.") (lambda (v) v))) #t) evaluates (and it prints out "Forced.").
I'd wildly guess that quite a lot of the overhead in auto-forcing is here. If you're sure the operator will never be a promise in any procedure call, then this particular aspect of auto forcing could be disabled.
(Without checking, I would guess that while Gambit may optimize so that local procedure calls may avoid being brought through the |force| logics, then, this optimization would do so that calls to procedures defined in *other* modules such as the runtime, would *not* be taken through |force| also. That should be positive for speed.)
Would it be possible for me to disable this particular aspect of the auto-forcing, to get higher performance?
Currently auto-forcing only works in interpreted code. So if your program does (f x) and (car y) and you compile that, then “f” will not be forced and “y” will not be forced if car is inlined, for example if you (declare (not safe)). You can consider this a bug… to be fixed.
Actually this is not quite true… f will be forced in the call (f x) thanks to this definition in lib/_kernel.scm:

(define-prim (##apply-with-procedure-check oper args)
  (##declare (not interrupts-enabled))
  (macro-force-vars (oper)
    (if (##procedure? oper)
        (##apply oper args)
        (##raise-nonprocedure-operator-exception oper args #f #f))))
Post by Adam
However, compiled predefined functions do the auto-forcing (when the system is configured with --enable-auto-forcing). So (car y) will force y if the actual function is called, for example if you (declare (safe)), which is the default.
Note that for “safe” compiled code, when compiling (f x) where f is a mutable global variable, it is necessary to generate a check that f is a procedure. If it is not a procedure a handler is called that normally raises an exception. This handler could easily be extended to first force the value and check if the resulting value is a procedure and proceed with the call if it is (hooray for raising exceptions in tail-position!). So there would be no additional cost for safe compiled code.
Can you please clarify what you mean here, by giving one or two pieces of example code, that illustrate the difference between various code operation options - so.. that would be
1) Interpreted code, vs.
2) Compiled code with (declare (safe)), vs.
3) Compiled code with (declare (not safe))
(I guess maybe (declare (block)) vs. (declare (separate)) could affect forcing behavior, as inlined code not would be forced, but non-inlined code could.)
Take this code, an explicit definition of a simple map function:

(define (mymap f lst)
  (if (pair? lst)
      (let ((elem (car lst)))
        (cons (f elem)
              (mymap f (cdr lst))))
      '()))

(pp (mymap (delay (lambda (x) (+ x 1))) '(10 20 30)))

Note that the first parameter to mymap is a promise whose forced value is a function.

In a system built with --enable-auto-forcing, the code runs fine in the interpreter and when compiled.

However, when (declare (not safe)) is used the primitive operations (car, cdr, …) will not automatically force the arguments, and the call operation will not automatically force the “operator” position, i.e. the function being called. So this code doesn’t work as expected when compiled:

(define (mymap f lst)
  (if (pair? lst)
      (let ((elem (car lst)))
        (cons (let () (declare (not safe)) (f elem))
              (mymap f (cdr lst))))
      '()))

(pp (mymap (delay (lambda (x) (+ x 1))) '(10 20 30)))

The compiler converts (car lst) into (##car lst) instead of the correct (##car (##force lst)). Also, when using the default (declare (safe)) the compiler converts (car lst) into

(if (##pair? lst) (##car lst) (car lst))

rather than the correct

(let ((lst (##force lst)))
  (if (##pair? lst) (##car lst) (car lst)))
Post by Adam
In either case if I understand you right, there are examples where Gambit with auto-forcing enabled, will fail executing ((delay (lambda () 'hello-world))) .
With --enable-auto-forcing, this case works fine with (declare (safe)), but will not work fine with (declare (not safe)).
Post by Adam
Also.. if I understand you right, there are cases when Gambit with auto-forcing enabled also would fail evaluating (abs (delay 0)) .
With --enable-auto-forcing, this case works only when abs is not inlined, in other words an actual function call to abs is performed (because the library definition correctly forces the argument). This can be achieved in various ways, like (declare (not run-time-bindings abs)) or (declare (standard-bindings) (not inline-primitives abs)).
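
For instance, a sketch based on the declarations Marc lists (assuming a Gambit built with --enable-auto-forcing): compiling something like the following should make the promise case work, because the out-of-line library abs forces its argument.

```scheme
;; Prevent inlining of abs so the (argument-forcing) library
;; definition of abs is actually called.
(declare (standard-bindings) (not inline-primitives abs))

(pp (abs (delay -1)))   ; the library abs forces the promise
```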

This needs to be fixed so that those declarations, which typically improve execution speed, can be used reliably in a system built with --enable-auto-forcing.
Post by Adam
Only for my clarity, please tell in what interval of conditions these will fail.
Thanks!
Marc
Adam
2017-09-18 19:07:15 UTC
Permalink
Hi Marc,

Thank you very much for clarifying.

Three brief followup questions at the bottom (marked 1. 2. 3.).

2017-09-18 23:55 GMT+08:00 Marc Feeley <***@iro.umontreal.ca>:
[..]
Post by Adam
Post by Adam
Also.. if I understand you right, there are cases when Gambit with
auto-forcing enabled also would fail evaluating (abs (delay 0)) .
With --enable-auto-forcing, this case works only when abs is not inlined,
in other words an actual function call to abs is performed (because the
library definition correctly forces the argument). This can be achieved in
various ways, like (declare (not run-time-bindings abs)) or (declare
(standard-bindings) (not inline-primitives abs)).
This needs to be fixed so that those declarations, which typically improve
execution speed, can be used reliably in a system built with
--enable-auto-forcing.
Ahaa, so that is a limitation that currently exists for --enable-auto-forcing.
Thanks for pointing it out!

[..]
Post by Adam
Also, when using the default (declare (safe)) the compiler converts (car
lst) into
(if (##pair? lst) (##car lst) (car lst))
rather than the correct
(let ((lst (##force lst)))
  (if (##pair? lst) (##car lst) (car lst)))
This was just another iteration of what you said already in the section
above right?

[..]
Post by Adam
Post by Adam
Would it be possible for me to disable this particular aspect of the
auto-forcing, to get higher performance?
Post by Adam
Currently auto-forcing only works in interpreted code. So if your
program does (f x) and (car y) and you compile that, then “f” will not be
forced and “y” will not be forced if car is inlined, for example if you
(declare (not safe)). You can consider this a bug… to be fixed.
Actually this is not quite true… f will be forced in the call (f x)
[also in compiled mode - 3. I interpret you to mean that here, that is
correct, right?]
Post by Adam
(define-prim (##apply-with-procedure-check oper args)
  (##declare (not interrupts-enabled))
  (macro-force-vars (oper)
    (if (##procedure? oper)
        (##apply oper args)
        (##raise-nonprocedure-operator-exception oper args #f #f))))
Wait, what does |##apply-with-procedure-check| actually do? In what
situations is it invoked? Is it run on every (f a1 a2 ...) with oper = f
and args = (list a1 a2 ...), for any procedure call made anywhere, when
compiling with (declare (safe))?


1. So if I just remove the |macro-force-vars| in there, |f| will not be
forced in compiled mode?


2. If it is possible, is there some easy way to make |f| *not* be forced
in compiled mode as well?


Not forcing |f| ever, would be useful in situations where you use the
auto-forcing only to force data structures but never any code.

I hypothesize that this will provide significant speed increases.

Will test and benchmark following your next clarification.


Thanks a lot!
Marc Feeley
2017-09-18 19:28:51 UTC
Permalink
Post by Adam
Hi Marc,
Thank you very much for clarifying.
Three brief followup questions at the bottom (marked 1. 2. 3.).
[..]
Post by Adam
Also.. if I understand you right, there are cases when Gambit with auto-forcing enabled also would fail evaluating (abs (delay 0)) .
With --enable-auto-forcing, this case works only when abs is not inlined, in other words an actual function call to abs is performed (because the library definition correctly forces the argument). This can be achieved in various ways, like (declare (not run-time-bindings abs)) or (declare (standard-bindings) (not inline-primitives abs)).
This needs to be fixed so that those declarations, which typically improve execution speed, can be used reliably in a system built with --enable-auto-forcing.
Ahaa, so that is a limit that exists currently for --enable-auto-forcing . Thanks for pointing out!
[..]
Also, when using the default (declare (safe)) the compiler converts (car lst) into
(if (##pair? lst) (##car lst) (car lst))
rather than the correct
(let ((lst (##force lst)))
(if (##pair? lst) (##car lst) (car lst)))
This was just another iteration of what you said already in the section above right?
I guess…
Post by Adam
[..]
Post by Adam
Would it be possible for me to disable this particular aspect of the auto-forcing, to get higher performance?
Currently auto-forcing only works in interpreted code. So if your program does (f x) and (car y) and you compile that, then “f” will not be forced and “y” will not be forced if car is inlined, for example if you (declare (not safe)). You can consider this a bug… to be fixed.
Actually this is not quite true… f will be forced in the call (f x)
[also in compiled mode - 3. I interpret you to mean that here, that is correct, right?]
Yes in compiled mode.
Post by Adam
(define-prim (##apply-with-procedure-check oper args)
  (##declare (not interrupts-enabled))
  (macro-force-vars (oper)
    (if (##procedure? oper)
        (##apply oper args)
        (##raise-nonprocedure-operator-exception oper args #f #f))))
Wait, what does |##apply-with-procedure-check| actually do, in what situations is it invoked, is this run on all (f a1 a2 ...) with oper = f and args = (list a1 a2 ...) for any procedure call made anywhere, when compiling with (declare (safe))?
No… In the scope of a (declare (safe)) the generated C code will check if the “operator” position, the f here, is a procedure. A direct transfer of control to f is done when f is a procedure. The function ##apply-with-procedure-check is tail called by the runtime system when (##procedure? oper) is #f. Note that it could be that oper is a promise whose forced value is a procedure, so ##apply-with-procedure-check forces oper and checks if the forced value is a procedure (assuming the runtime system was compiled with --enable-auto-forcing)
Post by Adam
1. So if I just remove the |macro-force-vars| in there, |f| will not be forced in compiled mode?
Yes.
Post by Adam
2. Just if it is possible, is there some easy way to make also |f| *not* be forced in compiled mode?
Just don’t use --enable-auto-forcing… or use (declare (not safe)) so that the runtime system doesn’t check that f is a procedure. Note that with (declare (safe)) in the case of the operator position of a call the auto-forcing doesn’t add any overhead because the common case is that the operator position is a procedure.
Post by Adam
Not forcing |f| ever, would be useful in situations where you use the auto-forcing only to force data structures but never any code.
I hypothesize that this will provide significant speed increases.
No… see previous comment. There is zero cost for auto-forcing the operator position in safe mode.
Post by Adam
Will test and benchmark following your next clarification.
Thanks a lot!
I don’t understand why you are so concerned with this issue (forcing the operator position of a call)… The real overhead is auto-forcing data-structures… A good approach to minimize the overhead is a dataflow analysis or even BBV…

Marc
Adam
2017-09-18 21:17:38 UTC
Permalink
Dear Marc,

Thanks for all your clarifications.

I think I gather from you that auto-forcing is fundamentally a very
expensive problem to solve, and that for this reason, to solve my
particular problem, I should reduce the set of Gambit primitives that need
to auto-force to a minimum.

And indeed, I agree with you that if such a simple reduction of the
problem is not possible, then dataflow analysis would be a good fit.


For my final clarity on the implications of this problem, below I'd like
to briefly ask you how forcing and the auto-forcing transformation
actually work, and also check whether forcing via protected virtual memory
could ever be a good idea -


2017-09-19 3:28 GMT+08:00 Marc Feeley <***@iro.umontreal.ca>:
[..]
Post by Adam
Post by Adam
Wait, what does |##apply-with-procedure-check| actually do, in what
situations is it invoked, is this run on all (f a1 a2 ...) with oper = f
and args = (list a1 a2 ...) for any procedure call made anywhere, when
compiling with (declare (safe))?
No… In the scope of a (declare (safe)) the generated C code will check if
the “operator” position, the f here, is a procedure. A direct transfer of
control to f is done when f is a procedure. The function
##apply-with-procedure-check is tail called by the runtime system when
(##procedure? oper) is #f.
Ah I understand - right, so auto-forcing has zero overhead for operators in
procedure calls. Great!
Post by Adam
Post by Adam
Will test and benchmark following your next clarification.
Thanks a lot!
I don’t understand why you are so concerned with this issue (forcing the
operator position of a call)… The real overhead is auto-forcing
data-structures… A good approach to minimize the overhead is a dataflow
analysis or even BBV…

I made a preliminary test of --enable-auto-forcing's overhead and it
suggested that --enable-auto-forcing out of the box incurs something like
400% overhead, on digest.scm , which is indeed a quite unfair example.

While not a rigorous approach on my behalf, I went on to ask the question
of how to make auto-forcing faster, and came up with the idea that
removing operator forcing could help speed things up.

Now you clarified that operator forcing actually has zero overhead - thanks.


I need to verify the 400% overhead figure, but where is most of the
overhead from auto-forcing incurred?


Is it that (##force), which is done by macro-force-vars on every single
value in the system, at every evaluation point (
https://github.com/gambit/gambit/blob/29103e6a29b8fbbf7d6fc772a344b814be3f1c1a/lib/_gambit%23.scm#L492),
has an inherent overhead in that an extra variable slot is added, and a
type check, a comparison, and a conditional jump?


I can't find ##force's code anywhere, so it appears to me that it's a
product of the compiler and is inlined. I presume ##force's pseudocode
would look something like

(define-prim (##force value)
  (if (##promise? value)
      (begin
        (if (not (##promise-value-slot-set? value))
            (##promise-value-slot-set!
             value
             (##promise-thunk-for-promise-code value)))
        (##promise-value-slot value))
      value))

, and its application in auto-forcing is a transformation something like

(define (language-primitive op .. arg ..) ..logics..)

to

(define (language-primitive op .. arg ..)
  (let ((arg (##force arg)))
    ..logics..))

.
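
As a concrete instance of this transformation (restating Marc's earlier car example, not new information), the safe inline expansion of (car lst) under auto-forcing becomes:

```scheme
;; The safe expansion of (car lst) with auto-forcing, per Marc's
;; description: force first, then do the inline pair check.
(let ((lst (##force lst)))
  (if (##pair? lst)
      (##car lst)
      (car lst)))   ; out-of-line call raises the type error
```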

I guess in this light, this alternative transformation would not be of any
particular use:

(define (language-primitive op .. arg ..)
  (set! arg (##force arg))
  ..logics..)


An exotic idea would be to use protected virtual memory as described here:
https://medium.com/@MartinCracauer/generational-garbage-collection-write-barriers-write-protection-and-userfaultfd-2-8b0e796b8f7f
. I guess this probably would not work out at all, but I would like to ask
you about it briefly anyhow -

For it to be useful globally, Brooks/forwarding pointers would need to be
enabled in Gambit (so normally the forwarding pointer would be a
self-reference, whereas for promises they would be located in the protected
memory, which would spark a SIGSEGV, used as a trigger to run the promise
code and update pointers).

For a limited use situation where the promise code's result type and size
are pre-known, virtual memory for the promise value could be pre-allocated,
and the SIGSEGV handler would function to spark the evaluation which would
lead to filling out those protected memory addresses with real values.

However, I doubt that the SIGSEGV handler could easily be made to interact
with the Scheme world!

That is, the SIGSEGV handler would cause a trampoline jump to the promise
code and, at completion of the promise code, store away its result and
continue at the Scheme code location that triggered the SIGSEGV.

I guess this would be difficult or impossible, because we don't know
exactly what location in the C code triggered the SIGSEGV, and so the GVM
might not have the operational integrity needed for a trampoline into the
Scheme world to take place -

I guess this is so far-out that it should not even be considered, what do
you say?



So then, right, code analysis would help.

Also, maybe the simplest speedup would be gained from reducing the forcing
so that it needs to take place only at a very limited subset of primitives.


Thanks,
Adam

Bradley Lucier
2017-06-14 18:28:32 UTC
Permalink
Post by Adam
git clone https://github.com/gambit/gambit.git
cd gambit
./configure --enable-auto-forcing
make -j4
mv gsc/gsc gsc-boot
make bootclean
make -j4
sudo make install
Or do you suggest any other sequence or way? Should I use "from-scratch"
instead of "make bootclean" + "make"?
This appears to build an executable:

git clone https://github.com/gambit/gambit.git
cd gambit
./configure
make -j4 current-gsc-boot
./configure --enable-single-host --enable-auto-forcing
make -j 8 from-scratch

It fails Test 1 of "make check", but I don't know whether that's relevant.

Brad
Adam
2017-06-16 02:49:59 UTC
Permalink
Post by Adam
git clone https://github.com/gambit/gambit.git
cd gambit
./configure --enable-auto-forcing
make -j4
mv gsc/gsc gsc-boot
make bootclean
make -j4
sudo make install
Or do you suggest any other sequence or way? Should I use "from-scratch"
instead of "make bootclean" + "make"?
git clone https://github.com/gambit/gambit.git
cd gambit
./configure
make -j4 current-gsc-boot
./configure --enable-single-host --enable-auto-forcing
make -j 8 from-scratch
Aha - "from-scratch" is likely the recommended way of doing things then,
Marc?
It fails Test 1 of "make check", but I don't know whether that's relevant.
What failure message do you get?
Adam
2017-07-14 08:56:37 UTC
Permalink
Post by Adam
git clone https://github.com/gambit/gambit.git
cd gambit
./configure --enable-auto-forcing
make -j4
mv gsc/gsc gsc-boot
make bootclean
make -j4
sudo make install
Or do you suggest any other sequence or way? Should I use "from-scratch"
instead of "make bootclean" + "make"?
git clone https://github.com/gambit/gambit.git
cd gambit
./configure
make -j4 current-gsc-boot
./configure --enable-single-host --enable-auto-forcing
make -j 8 from-scratch
It fails Test 1 of "make check", but I don't know whether that's relevant.
Brad
Brad,

What you suggested now would be how to build the current beta SMP Gambit,
which is currently in a transitory period with lots of deep changes,
right? -

Let's nail down how to do it in the 'ordinary' case too - that would be in
a while from now, and for older Gambit versions. Would it be like this?:

./configure --enable-auto-forcing
make from-scratch
cp gsc/gsc ./gsc-boot
make clean
make
sudo make install
Bradley Lucier
2017-07-14 16:41:20 UTC
Permalink
Post by Adam
git clone https://github.com/gambit/gambit.git
cd gambit
./configure --enable-auto-force
make -j4
mv gsc/gsc gsc-boot
make bootclean
make -j4
sudo make install
Or do you suggest any other sequence or way? Should I use
"from-scratch" instead of "make bootclean" + "make"?
git clone https://github.com/gambit/gambit.git
cd gambit
./configure
make -j4 current-gsc-boot
./configure --enable-single-host --enable-auto-forcing
make -j 8 from-scratch
It fails Test 1 of "make check", but I don't know whether that's relevant.
Brad
Brad,
What you suggested would be how to build the current beta SMP Gambit,
which is currently in a transitory period of lots of deep changes, right?
I don't understand this, sorry.
Post by Adam
Let's nail how to do it in the 'ordinary' case too, so that would be in
./configure --enable-auto-forcing
make from-scratch
cp gsc/gsc ./gsc-boot
make clean
make
sudo make install
Again, I'm a bit confused. I recommend the sequence of commands I
already gave to build with --enable-auto-forcing:

git clone https://github.com/gambit/gambit.git
cd gambit
./configure
make -j 4 current-gsc-boot
./configure --enable-single-host --enable-auto-forcing
make -j 4 from-scratch
make -j 4 doc
sudo make install
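For reference, the two-phase bootstrap above can be sketched as a small shell wrapper. This is hypothetical and not part of the Gambit tree: it only records the recommended sequence and prints it as a plan, so actually running the steps is left to the reader.

```shell
#!/bin/sh
# Hypothetical helper (NOT in the Gambit tree): records the two-phase
# bootstrap sequence recommended above and prints it, without executing
# anything.
PLAN=""
plan() { PLAN="${PLAN}${1}
"; }

# Phase 1: a plain build, to obtain a bootstrap compiler.
plan "git clone https://github.com/gambit/gambit.git"
plan "cd gambit"
plan "./configure"
plan "make -j 4 current-gsc-boot"   # builds gsc and freezes gsc/gsc as gsc-boot

# Phase 2: reconfigure with auto-forcing and rebuild everything from
# scratch, so the runtime and user code get auto-forcing uniformly.
plan "./configure --enable-single-host --enable-auto-forcing"
plan "make -j 4 from-scratch"
plan "make -j 4 doc"
plan "sudo make install"

printf '%s' "$PLAN"
```

Printing the plan instead of executing it makes the two phases easy to inspect: first a vanilla compiler is frozen as gsc-boot, then the auto-forcing configuration is applied to a complete rebuild.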
Adam
2017-07-25 08:57:35 UTC
Permalink
Hi Brad,
Post by Bradley Lucier
Post by Adam
git clone https://github.com/gambit/gambit.git
cd gambit
./configure --enable-auto-force
make -j4
mv gsc/gsc gsc-boot
make bootclean
make -j4
sudo make install
Or do you suggest any other sequence or way? Should I use
"from-scratch" instead of "make bootclean" + "make"?
git clone https://github.com/gambit/gambit.git
cd gambit
./configure
make -j4 current-gsc-boot
./configure --enable-single-host --enable-auto-forcing
make -j 8 from-scratch
It fails Test 1 of "make check", but I don't know whether that's relevant.
Brad
Brad,
What you suggested would be how to build the current beta SMP Gambit,
which is currently in a transitory period of lots of deep changes, right?
I don't understand this, sorry.
(Nevermind.)
Post by Bradley Lucier
Let's nail how to do it in the 'ordinary' case too, so that would be in a
Post by Adam
./configure --enable-auto-forcing
make from-scratch
cp gsc/gsc ./gsc-boot
make clean
make
sudo make install
Again, I'm a bit confused. I recommend the sequence of commands I already
git clone https://github.com/gambit/gambit.git
cd gambit
./configure
make -j 4 current-gsc-boot
./configure --enable-single-host --enable-auto-forcing
make -j 4 from-scratch
make -j 4 doc
sudo make install
Ah, great.

The "current-gsc-boot" target basically builds Gambit and then puts that
particular compiler binary (./gsc/gsc) in ./gsc-boot (
https://github.com/gambit/gambit/blob/08730be98e86d15eae9da5e5de8cf1d2f9c353f0/makefile.in#L158).
Neat.

And "from-scratch" does a really deep wipe, including the pregenerated .c
files, and then a total rebuild (
https://github.com/gambit/gambit/blob/08730be98e86d15eae9da5e5de8cf1d2f9c353f0/makefile.in#L109
).

Neat.

Thanks for clarifying.

So this is the long-term best practice, and any change to it at any point
would prompt discussion here on the ML.