Discovering network appliances
Tuesday, March 28 2017 01:12 GMT
Posted by: Membrane | Tags: programming, raspberry pi
Appliances
A toaster is an appliance: it does one job well and is easily replaced if broken. A PC is not an appliance: it does many jobs, often not well, and for most people is a nightmare to replace. But what if we could divide all of a PC's jobs among a set of reliable networked appliances?
The word appliance might conjure images of washing, blending, or garbage disposing gizmos, but in computing we can use the concept to describe a device with the best quality of any appliance in your kitchen: it has one job to do, and does it perfectly every time with the absolute minimum help from you. At Membrane HQ, we have dozens of computing devices performing a wide array of tasks, and we treat as many of them as possible like appliances. Raspberry Pi boards, in particular, are extremely useful for this type of work because they're easily programmable and can connect to other devices via Ethernet, module boards, or custom soldering. With a board that can be programmed with arbitrary logic and then put in control of an arbitrary slave device, we can create an appliance for absolutely any task if we put our minds to it. And because Raspberry Pi boards are so inexpensive, we gain even more desirable appliance-like attributes: we can easily replace one that breaks, and we can get more of them if we have too much work for only one to handle.
A system of agents
Each appliance in our system runs an agent: a software process responsible for controlling its functions. An agent has certain important responsibilities to fulfill if its host appliance is to be a contributing member of the system.
  • The agent must be able to control our desired functions on its host device. Usually, this means someone with access to the device has installed and configured the agent software. For example, an agent in charge of streaming video from a set of media files would need to be configured with access to a directory containing those files (see the configuration sketch after this list).
  • Since we'd like to direct our agents remotely, each agent must have a way to accept commands over the network. For easier setup of appliances, we'd also like each agent to make itself automatically discoverable on the network.
  • When we're not able to control an agent due to a network disruption, the agent must be autonomous enough to continue doing its task until communication is re-established.
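As a sketch of that first responsibility, an agent might read a small configuration file at startup and then start its work. The file path and field names below are assumptions for illustration, not Membrane's actual configuration format.

    // Hypothetical startup for a display agent. Assumes a config.json file
    // such as: { "mediaPath": "/home/pi/videos", "listenPort": 63738 }
    const fs = require('fs');

    const config = JSON.parse(fs.readFileSync('/etc/membrane/config.json', 'utf8'));
    console.log('Agent starting with media path:', config.mediaPath);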
Note that our description of a software agent is imprecise in certain ways, and that's quite intentional. In particular, an agent is not defined in terms of any individual programming language or platform. It's up to us to choose the best language and platform for the hardware we expect the agent to run on, as well as to adapt and change if today's best language and platform fall to second-best or below later on.
Taken together, our set of expectations adds up to what we want from an appliance: able to do its job autonomously as long as it's powered up, while also accepting commands over the network. It requires competent programming effort to create reliable agent software for each appliance, but if done correctly the result is worth it. By building a network of autonomous agents, each with its own capabilities and controlled from a master console, we enable untold numbers of interesting applications.
Making discoveries
Membrane agents are set up to listen on the network for commands. One of those commands is ReportStatus, a simple request for the receiver to report status information to an included URL. When an agent wishes to find others on the network, it broadcasts this command and all receivers report back.
[Image: Discovery diagram]
Sending a broadcast message to discover agents. Any agent on the same network receives the message and responds with status data, including its list of capabilities.
An agent's status report includes a list of its supported capabilities, each of which indicates the presence of a related set of commands. For example, an agent with an attached camera might report that it has the CameraControl capability, meaning it accepts commands such as CaptureImage and CaptureVideo. Another agent with an attached display might possess the DisplayControl capability, meaning it accepts commands such as ShowImage and PlayVideo. Yet a third agent might have access to a camera as well as a display, and therefore accept both sets of commands.
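To make the flow concrete, here's a minimal sketch of both sides of the exchange in Node.js. It assumes commands travel as JSON over UDP broadcast and that status reports come back as JSON over HTTP POST; the port number, field names, and addresses are illustrative choices, not Membrane's actual protocol.

    // Minimal discovery sketch. In practice the two halves below run on
    // different machines; the port, field names, and addresses are
    // illustrative only.
    const dgram = require('dgram');
    const http = require('http');
    const { URL } = require('url');

    const AGENT_PORT = 63738;

    // Agent side: listen for ReportStatus commands and answer each one.
    const agentSocket = dgram.createSocket('udp4');
    agentSocket.on('message', (msg) => {
      let command;
      try { command = JSON.parse(msg.toString()); } catch (err) { return; }
      if (command.name !== 'ReportStatus') return;

      // Report status, including supported capabilities, to the included URL.
      const status = JSON.stringify({
        agentId: 'display-01',
        capabilities: ['DisplayControl']
      });
      const url = new URL(command.statusUrl);
      const req = http.request({
        hostname: url.hostname,
        port: url.port,
        path: url.pathname,
        method: 'POST',
        headers: { 'Content-Type': 'application/json' }
      });
      req.on('error', (err) => console.error('status report failed:', err.message));
      req.end(status);
    });
    agentSocket.bind(AGENT_PORT);

    // Controller side: broadcast a ReportStatus command to the local network.
    const controllerSocket = dgram.createSocket('udp4');
    controllerSocket.bind(() => {
      controllerSocket.setBroadcast(true);
      const command = JSON.stringify({
        name: 'ReportStatus',
        statusUrl: 'http://192.168.1.10:8080/status'
      });
      controllerSocket.send(command, AGENT_PORT, '255.255.255.255');
    });

In practice the controller would also run an HTTP listener at the status URL to collect the reports, and each agent would fill in its own capability list based on the hardware it controls.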
Of Pis and PCs
We've spent the article so far imagining our ideal system of connected appliances: a set of agent processes, each making itself available on the network while being able to control whatever interesting devices are present on its hardware. Talking about ideals is fun, but to bring our vision into reality we need to get specific about ways and means.
The first task we'll ask of our system is to grant us control over a set of displays. We'd like to send a display commands that cause it to play video from a media server. Once simple playback commands are in place, more complex functions such as loops and shuffles can be built on that foundation.
[Image: Monitors]
Each display in the system is an ordinary computer monitor. These seem to be getting less and less expensive every time I look; right now I'm seeing one site that has 24-inch LED backlight monitors on sale for $109.99.
[Image: Raspberry Pi closeup]
We use a Raspberry Pi device to drive the display on each monitor via an HDMI connection. To program our agent process on the Raspberry Pi, we've selected a stack of free software.
  • Raspbian: As the official Linux distribution for the Raspberry Pi, Raspbian serves as our base operating system. Linux has a good track record for stability, and also supports just about any platform we'd want to use for programming agents.
  • Node.js: A runtime environment that grants a programmer control over many system functions, including ones important to us such as networking and launching child processes. Programming in Node.js uses an asynchronous, event-driven model that we find ideal for a system of agents.
  • Omxplayer: A video player application that runs on Raspbian. Omxplayer supports streaming media playback via RTMP, RTSP, and HTTP Live Streaming, thereby letting us play videos from a media server instead of requiring local storage on each Raspberry Pi device.
On a Raspberry Pi, then, our system agent takes the form of a Node.js application running on Linux and able to launch Omxplayer on command. The Raspberry Pi devices themselves are rather inexpensive, and sell for $35 each.
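Launching the player from Node.js can be as simple as spawning a child process. The sketch below is a rough illustration under that assumption; the function name and omxplayer options shown are examples, not Membrane's exact implementation.

    // Rough sketch of a display agent's playback handler, assuming playback
    // is started by spawning omxplayer as a child process.
    const { spawn } = require('child_process');

    let player = null;

    function playVideo(streamUrl) {
      // Stop any playback that's already in progress.
      if (player) player.kill();

      // omxplayer renders straight to the display, so no window system is needed.
      player = spawn('omxplayer', ['-o', 'hdmi', streamUrl]);
      player.on('exit', () => { player = null; });
    }

    // Example: play a stream served over HTTP by a media server agent.
    playVideo('http://192.168.1.10:8080/streams/demo.m3u8');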
[Image: PC on a table]
We want to control the Raspberry Pi displays using a graphical interface. To do so, we've chosen to implement an agent on the PC, again making use of a free software stack.
  • Simple DirectMedia Layer: SDL provides our application with a window and rendering routines as well as other core functions, and supports the major desktop operating systems: Windows, macOS, and Linux. SDL claims Android and iOS support as well, which should come in handy when the time comes for mobile apps.
  • FreeType: Applications need text, but rendering fonts can be a finicky business. The FreeType library helps us by parsing TTF (TrueType Font) files and providing the resulting bitmaps. As for the fonts themselves, Google Fonts offers quite a variety of fonts for general use.
  • FFmpeg: Our display control functions involve streaming audio and video, and FFmpeg has long been the state of the art for media processing. In this system, it helps by preparing video files for streaming, pulling thumbnails from videos, decoding and re-encoding media data, and more.
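As one small example of the work FFmpeg handles for us, pulling a thumbnail from a video comes down to a single ffmpeg invocation. The sketch below launches it from Node.js for brevity; in the PC application the same command would be run from native code, and the file names and seek offset are placeholders.

    // Rough sketch of thumbnail extraction by running ffmpeg as a child
    // process: seek a few seconds in and write a single frame as a JPEG.
    const { execFile } = require('child_process');

    function captureThumbnail(videoPath, thumbnailPath, callback) {
      execFile('ffmpeg', [
        '-ss', '5', '-i', videoPath, '-vframes', '1', '-y', thumbnailPath
      ], callback);
    }

    captureThumbnail('demo.mp4', 'demo-thumb.jpg', (err) => {
      if (err) console.error('ffmpeg failed:', err);
    });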
On a PC, then, our system agent takes the form of an SDL application that presents the user with a graphical interface while also broadcasting messages to find available agents. When an agent is found, the application checks its capabilities and makes its controls available to the user.
Membrane Control: a work in progress
The Node.js application for our Raspberry Pi agent already has much of its software in place. In demo videos, we show how it's possible to control video playback and camera capture from a tablet application.
Membrane Control, our PC agent application, is in development. A few thoughts about its design were briefly described in a previous blog post. At that time, we had Membrane Control set up with an "Address" bar and a "Connect" button, expecting the user to enter the name of an agent to control. However, we have now moved to the discovery scheme outlined above, removing the need for manual entry of names.
[Image: Membrane Control screenshot 1]
If we start Membrane Control with no Raspberry Pi agents, it shows an empty node map.
[Image: Membrane Control screenshot 2]
After starting two agents, Membrane Control automatically discovers them. In this case, one agent reports both the media server and display control capabilities, while the other reports only the media server capability. To move toward our goal of easy control over video playback, the next thing this interface needs is a way to browse items from those media servers. From there, we can look for available display control agents and let the user send them commands for playback.
What did you think of this article? Leave a comment or send us a note. Your feedback is appreciated!