Realtime Servers¶
Supriya’s Server provides a handle to a scsynth process, allowing you to control the process’s lifecycle, interact with the entities it governs, and query its state.
Lifecycle¶
Instantiate a server with:
>>> server = supriya.Server()
Instantiated servers are initially offline:
>>> server
<supriya.contexts.realtime.Server object at 0x7f53ac9d52d0>
To bring an offline server online, boot the server:
>>> server.boot()
<supriya.contexts.realtime.Server object at 0x7f53ac9d52d0>
Quit a running server:
>>> server.quit()
<supriya.contexts.realtime.Server object at 0x7f53ac9d52d0>
Booting without any additional options will use default settings for the scsynth server process, e.g. listening on the IP address 127.0.0.1 and port 57110, and will automatically attempt to detect the location of the scsynth binary via supriya.scsynth.find().
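If you want to see which binary will be used before booting, you can call the finder yourself. A minimal sketch; the resolved path depends entirely on your installation and PATH:
>>> scsynth_path = supriya.scsynth.find()  # path to the detected scsynth binary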
You can override the IP address or port via keyword arguments:
>>> server.boot(ip_address="0.0.0.0", port=56666)
<supriya.contexts.realtime.Server object at 0x7f53ac9d52d0>
Caution
Attempting to boot a server on a port where another server is already running will result in an error:
>>> server_one = supriya.Server()
>>> server_two = supriya.Server()
>>> server_one.boot()
<supriya.contexts.realtime.Server object at 0x7f53ac9d71f0>
>>> server_two.boot()
Received: *** ERROR: failed to open UDP socket: address in use.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/runner/work/supriya/supriya/supriya/contexts/realtime.py", line 483, in boot
self._process_protocol.boot(self._options)
File "/home/runner/work/supriya/supriya/supriya/scsynth.py", line 309, in boot
raise ServerCannotBoot(line)
supriya.exceptions.ServerCannotBoot: *** ERROR: failed to open UDP socket: address in use.
Use find_free_port() to grab a random unused port so the second server can boot successfully:
>>> server_two.boot(port=supriya.osc.find_free_port())
<supriya.contexts.realtime.Server object at 0x7f53ac9d5030>
You can also explicitly select the server binary via the executable keyword:
>>> server.boot(executable="scsynth")
<supriya.contexts.realtime.Server object at 0x7f53ac9d52d0>
The executable keyword also allows you to boot with supernova if you have it available:
>>> server.boot(executable="supernova")
<supriya.contexts.realtime.Server object at 0x7f53ac9d52d0>
Boot options¶
scsynth can be booted with a wide variety of command-line arguments, which Supriya models via an Options class:
>>> supriya.Options()
Options(
audio_bus_channel_count=1024,
block_size=64,
buffer_count=1024,
control_bus_channel_count=16384,
executable=None,
hardware_buffer_size=None,
initial_node_id=1000,
input_bus_channel_count=8,
input_device=None,
input_stream_mask='',
ip_address='127.0.0.1',
load_synthdefs=True,
maximum_logins=1,
maximum_node_count=1024,
maximum_synthdef_count=1024,
memory_locking=False,
memory_size=8192,
output_bus_channel_count=8,
output_device=None,
output_stream_mask='',
password=None,
port=57110,
protocol='udp',
random_number_generator_count=64,
realtime=True,
remote_control_volume=False,
restricted_path=None,
sample_rate=None,
threads=None,
ugen_plugins_path=None,
verbosity=0,
wire_buffer_count=64,
zero_configuration=False,
)
Pass any of the named options found in Options as keyword arguments when booting:
>>> server.boot(input_bus_channel_count=2, output_bus_channel_count=2)
<supriya.contexts.realtime.Server object at 0x7f53ac9d52d0>
Multiple clients¶
SuperCollider supports multiple users interacting with a single server simultaneously. One user boots the server and governs the underlying server process, and the remaining users simply connect to it.
Make sure that the server boots with maximum_logins set to the maximum number of users you expect to log into the server at once, because the default login count is 1:
>>> server_one = supriya.Server().boot(maximum_logins=2)
Connect to the existing server:
>>> server_two = supriya.Server().connect(
... ip_address=server_one.options.ip_address,
... port=server_one.options.port,
... )
Each connected user has their own client ID and default group:
>>> server_one.client_id
0
>>> server_two.client_id
1
>>> print(server_one.query_tree())
NODE TREE 0 group
1 group
2 group
Note that server_one is owned, while server_two isn’t:
>>> server_one.is_owner
True
>>> server_two.is_owner
False
Supriya provides some very limited guard-rails to prevent server shutdown by non-owners, e.g. a force boolean flag which non-owners can set to True if they really want to quit the server. Without force, quitting a non-owned server will error:
>>> server_two.quit()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/runner/work/supriya/supriya/supriya/contexts/realtime.py", line 743, in quit
raise UnownedServerShutdown(
supriya.exceptions.UnownedServerShutdown: Cannot quit unowned server without force flag.
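To force the shutdown anyway, a non-owner can pass the force flag, e.g. server_two.quit(force=True). We skip that here so we can still disconnect cleanly below.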
Finally, disconnect:
>>> server_two.disconnect()
<supriya.contexts.realtime.Server object at 0x7f53ac58fb80>
Disconnecting won’t terminate the server. It continues to run from wherever server_one was originally booted.
Inspection¶
Server provides a number of methods and properties for inspecting its state.
>>> server = supriya.Server().boot()
Inspect the “status” of audio processing:
>>> server.status
StatusInfo(actual_sample_rate=44113.384030730675, average_cpu_usage=0.22797448933124542, group_count=2, peak_cpu_usage=0.8412342667579651, synth_count=0, synthdef_count=32, target_sample_rate=44100.0, ugen_count=0)
Hint
Server status is a great way of tracking scsynth’s CPU usage.
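For example, here is a rough sketch of sampling CPU usage over time; the field names come from the StatusInfo output shown above, and the sampling interval is arbitrary:
>>> import time
>>> cpu_samples = []
>>> for _ in range(3):
...     # StatusInfo exposes average_cpu_usage and peak_cpu_usage as plain floats
...     cpu_samples.append(server.status.average_cpu_usage)
...     time.sleep(1)
...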
Let’s add a synth (explained in a later section) to make the status output more interesting:
>>> synth = server.add_synth(supriya.default)
>>> server.status
StatusInfo(actual_sample_rate=44113.15748876621, average_cpu_usage=0.3941679000854492, group_count=2, peak_cpu_usage=0.8412342667579651, synth_count=0, synthdef_count=32, target_sample_rate=44100.0, ugen_count=0)
Adding the synth increases synth_count, synthdef_count and ugen_count once the server’s cached status refreshes (the output above may lag slightly behind). We’ll discuss these concepts in the following sections.
Querying the node tree with query_tree() returns a “query tree” representation, which you can print to generate output similar to SuperCollider’s s.queryAllNodes server method:
>>> server.query_tree()
QueryTreeGroup(node_id=0, children=[QueryTreeGroup(node_id=1, children=[])])
>>> print(_)
NODE TREE 0 group
1 group
Access the server’s root node and default group:
>>> server.root_node
RootNode(context=<supriya.contexts.realtime.Server object at 0x7f53ac58c100>, id_=0, parallel=False)
>>> server.default_group
Group(context=<supriya.contexts.realtime.Server object at 0x7f53ac58c100>, id_=1, parallel=False)
And access the input and output audio bus groups, which represent microphone inputs and speaker outputs:
>>> server.audio_input_bus_group
BusGroup(context=<supriya.contexts.realtime.Server object at 0x7f53ac58c100>, id_=8, calculation_rate=CalculationRate.AUDIO, count=8)
>>> server.audio_output_bus_group
BusGroup(context=<supriya.contexts.realtime.Server object at 0x7f53ac58c100>, id_=0, calculation_rate=CalculationRate.AUDIO, count=8)
Interaction¶
The server provides a variety of methods for interacting with it and modifying its state.
You can send OSC messages via the send() method, either as explicit OscMessage or OscBundle objects, or as Requestable objects:
>>> from supriya.osc import OscMessage
>>> server.send(OscMessage("/g_new", 1000, 0, 1))
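Bundles work the same way. Here is a quick sketch, assuming OscBundle takes its messages via a contents argument; the node ID 1100 is purely illustrative:
>>> from supriya.osc import OscBundle
>>> server.send(
...     OscBundle(
...         contents=[
...             OscMessage("/g_new", 1100, 0, 1),  # create a throwaway group...
...             OscMessage("/n_free", 1100),  # ...and free it in the same bundle
...         ],
...     ),
... )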
Many interactions with scsynth don’t take effect immediately. In fact, none of them really do, because the server behaves asynchronously. For operations with significant delay, e.g. sending multiple SynthDefs or reading/writing buffers from/to disk, use sync() to block until all previously initiated operations complete:
>>> server.sync()
<supriya.contexts.realtime.Server object at 0x7f53ac58c100>
Note
See Open Sound Control for more information about OSC communication with the server, including OSC callbacks.
The server provides methods for allocating nodes (groups and synths), buffers and buses, all of which are discussed in the sections following this one:
>>> server.add_group()
Group(context=<supriya.contexts.realtime.Server object at 0x7f53ac58c100>, id_=1000, parallel=False)
>>> server.add_synth(supriya.default, amplitude=0.25, frequency=441.3)
Synth(context=<supriya.contexts.realtime.Server object at 0x7f53ac58c100>, id_=1001, synthdef=<SynthDef: default>)
>>> server.add_buffer(channel_count=1, frame_count=512)
Buffer(context=<supriya.contexts.realtime.Server object at 0x7f53ac58c100>, id_=0, completion=Completion(context=<supriya.contexts.realtime.Server object at 0x7f53ac58c100>, moment=Moment(context=<supriya.contexts.realtime.Server object at 0x7f53ac58c100>, seconds=None, closed=True, requests=[(AllocateBuffer(buffer_id=0, frame_count=512, channel_count=1, on_completion=None), ...)]), requests=[]))
>>> server.add_buffer_group(count=8, channel_count=2, frame_count=1024)
BufferGroup(context=<supriya.contexts.realtime.Server object at 0x7f53ac58c100>, id_=1, count=8)
>>> server.add_bus()
Bus(context=<supriya.contexts.realtime.Server object at 0x7f53ac58c100>, id_=0, calculation_rate=CalculationRate.CONTROL)
>>> server.add_bus_group(count=2, calculation_rate="audio")
BusGroup(context=<supriya.contexts.realtime.Server object at 0x7f53ac58c100>, id_=16, calculation_rate=CalculationRate.AUDIO, count=2)
>>> print(server.query_tree())
NODE TREE 0 group
1 group
1000 group
Resetting¶
Supriya supports resetting the state of the server, similar to SuperCollider’s CmdPeriod:
>>> server.reset()
<supriya.contexts.realtime.Server object at 0x7f53ac58c100>
>>> print(server.query_tree())
NODE TREE 0 group
1 group
You can also just reboot the server, completely resetting all nodes, buses, buffers and SynthDefs:
>>> server.reboot()
<supriya.contexts.realtime.Server object at 0x7f53ac58c100>
Async¶
Supriya supports asyncio event loops via AsyncServer, which provides async variants of many of Server’s methods. All lifecycle methods (booting, quitting) are async, as are all getter and query methods.
>>> import asyncio
>>> async def main():
... # Instantiate an async server
... print(async_server := supriya.AsyncServer())
... # Boot it on an arbitrary open port
... print(await async_server.boot(port=supriya.osc.find_free_port()))
... # Send an OSC message to the async server (doesn't require await!)
... async_server.send(["/g_new", 1000, 0, 1])
... # Query the async server's node tree
... print(await async_server.query_tree())
... # Quit the async server
... print(await async_server.quit())
...
>>> asyncio.run(main())
<supriya.contexts.realtime.AsyncServer object at 0x7f53ac58e170>
<supriya.contexts.realtime.AsyncServer object at 0x7f53ac58e170>
NODE TREE 0 group
1 group
1000 group
<supriya.contexts.realtime.AsyncServer object at 0x7f53ac58e170>
Use AsyncServer with AsyncClock to integrate with event-loop-driven libraries like aiohttp, python-prompt-toolkit and pymonome.
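As a rough sketch of how an AsyncServer slots into a larger asyncio application, the server below runs alongside another coroutine; other_work() is purely a stand-in for whatever your event-loop-driven library does:
>>> import asyncio
>>> async def other_work():
...     # stand-in for aiohttp handlers, prompt-toolkit UI, monome events, etc.
...     await asyncio.sleep(0.25)
...
>>> async def run_application():
...     async_server = supriya.AsyncServer()
...     await async_server.boot(port=supriya.osc.find_free_port())
...     # the server's async methods share the event loop with everything else
...     await asyncio.gather(other_work())
...     await async_server.quit()
...
>>> asyncio.run(run_application())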
Lower level APIs¶
You can kill all running scsynth processes via supriya.scsynth.kill():
>>> supriya.scsynth.kill()
Get access to the server’s underlying process management subsystem via process_protocol:
>>> server.process_protocol
<supriya.scsynth.SyncProcessProtocol object at 0x7f53ac58c5e0>
Get access to the server’s underlying OSC subsystem via osc_protocol:
>>> server.osc_protocol
<supriya.osc.ThreadedOscProtocol object at 0x7f53ac58e7d0>
Note
Server manages its scsynth subprocess and OSC communication via SyncProcessProtocol and ThreadedOscProtocol objects, while the AsyncServer discussed above uses AsyncProcessProtocol and AsyncOscProtocol objects.