Kind of Code (http://kindofcode.com): anything related to code

Centralized SSH tunnel hub for Raspberry Pi (Thu, 16 Jul 2015)

The other week I was developing a system which involved distributing Raspberry Pis (or similar SoC devices) to physical locations I would not have easy access to. The need to somehow get shell access to these devices became apparent right away; not for running and managing the application itself, but for the monitoring and manual error handling that would eventually be required. To accomplish this I developed the ssh-hub project, which provides a way to centrally manage a set of reverse SSH tunnels for a set of Raspberry Pis, as well as a simple way to execute shell commands on the remotely located devices.

In my use case the reverse SSH tunnels provide the same feature that dynamic DNS would: the ability to address each of the SoC terminals directly even though none of them has a static IP assigned. More than that, a reverse SSH tunnel also bypasses many of the network quirks that may lie between the Pi and the internet, such as NATs.

The ssh-hub service provides a CRUD-like REST interface for managing terminals in the set. Adding a terminal creates a Linux user on the machine running the ssh-hub service and generates two RSA key pairs. One pair is for client-to-server access (with very limited privileges); installed on the Raspberry Pi, it allows the Pi to establish a reverse SSH tunnel to the ssh-hub machine. The other pair allows the ssh-hub to access the Raspberry Pi's sshd through the open reverse tunnel.

The create operation in the REST interface also generates a tar.gz package which needs to be transferred to the Raspberry Pi out-of-band. You could potentially serve the client_package.tar.gz through the REST interface as well, but I haven't figured out a safe way to do it. The package contains a script which creates the required user on the Raspberry Pi, installs the relevant SSH keys and ensures that a persistent reverse SSH tunnel is established from the Raspberry Pi to the ssh-hub server whenever the Pi is started.

[Diagram: initializing a terminal Pi]

Once this is done, the POST method /terminals/[terminal_id]/run can be used to send a text/plain HTTP body containing arbitrary commands to the Raspberry Pis managed by the ssh-hub. The response body will contain the standard output and standard error of the executed command. Alternatively, an administrator with SSH access to the ssh-hub machine can gain direct SSH access to the different attached Pi units.
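As a sketch of what a call to that endpoint could look like from JavaScript (the hub hostname, port and terminal id below are made-up examples; only the /terminals/[terminal_id]/run path and the text/plain body come from ssh-hub itself):

```javascript
// Build the request for the ssh-hub run endpoint. Splitting the
// construction from the network call keeps the sketch self-contained.
function buildRunRequest(hubBaseUrl, terminalId, command) {
  return {
    url: hubBaseUrl + '/terminals/' + encodeURIComponent(terminalId) + '/run',
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'text/plain' }, // plain-text command body
      body: command
    }
  };
}

// Usage against a running ssh-hub instance (hostname and id are examples):
// var req = buildRunRequest('http://hub.example.com:8080', 'pi-01', 'uptime');
// fetch(req.url, req.options)
//   .then(function(res) { return res.text(); }) // stdout + stderr of the command
//   .then(console.log);
```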

It is probably worth pointing out that the ssh-hub REST interface should not be publicly accessible. It is currently only secured by basic auth.

When I tested the system the ssh-hub instance was installed on an Ubuntu server and the Raspberry Pi units ran a recent version of the Raspbian distribution (Debian-based).

P.S. The head of the ssh-hub repository is not yet completely tested, as I do not have access to all the hardware required. I will update the post as soon as this is done. I have a working copy; it is just not as pretty as the head. If you intend to use it right away, be ready for some tweaking.

Stomp Client Multiplexing (Sun, 22 Mar 2015)

WebSockets are great. Even when they are not supported we can usually make do with shim technologies such as SockJS to provide the main features offered by WebSockets. Something neither WebSockets nor the shim technologies support, however, is the ability to maintain several clients from one browser session (granted, this is a bit beyond the WebSocket specification, but it is still a fact we must cope with).

Why is this an issue? An example of the problems that may arise:

Suppose an application uses STOMP over WebSockets, as provided by the RabbitMQ Web STOMP plugin. In a moderately complex application, several independent modules would need WebSocket access to listen to queues and send messages. The simplest approach would be to let each module initialize its own client, to minimize the inter-dependencies between modules. This is prevented by many browser WebSocket implementations, as well as by the shim libraries, which are limited in the number of HTTP connections they can hold open (shim libraries usually fall back to long or short HTTP polling).

To get around this we can implement multiplexing in the client application to simulate the ability to maintain several concurrent WebSocket connections. I made a quick attempt at this in an Angular context, using the SockJS client and the STOMP client library recommended by RabbitMQ.

.service('Guid',[function() {
        return {
            generate : function() {
                function s4() {
                    return Math.floor((1 + Math.random()) * 0x10000)
                        .toString(16)
                        .substring(1);
                }
                return s4() + s4() + '-' + s4() + '-' + s4() + '-' +
                    s4() + '-' + s4() + s4() + s4();
            }
        }
    }])
    .service('StompClientFactory', ['$q', 'Guid', function($q, Guid) {
        var _self = {};

        var ws = new SockJS('http://' + window.location.hostname + ':15674/stomp');
        var stompClient = Stomp.over(ws);
        // SockJS does not support heart-beat: disable heart-beats
        stompClient.heartbeat.outgoing = 0;
        stompClient.heartbeat.incoming = 0;

        var deferred = $q.defer();

        //The stomp interface presented to the rest of the angular app
        function makeWrappedClient(stompClient) {
            var that = {};
            var listenerIds = [];
            function makeListenerId() {
                var listenerId = Guid.generate();
                listenerIds.push(listenerId);
                return listenerId;
            };
            that.makeSender = function(sendDestination) {
                return function(data) {
                    stompClient.send(sendDestination, {'content-type':'application/json'}, JSON.stringify(data));
                };
            };
            that.makeTempSender = function(sendDestination, callback) {
                var listenerId = makeListenerId();
                var temporaryQueue = '/temp-queue/'+listenerId;
                _self.registerListener(listenerId, temporaryQueue, callback);
                return function(msgBody) {
                    stompClient.send( sendDestination, {
                        'reply-to': temporaryQueue,
                        'content-type':'application/json'
                    }, JSON.stringify(msgBody));
                };
            };
            that.send = function(sendDestination, msgBody) {
                stompClient.send(sendDestination, {'content-type':'application/json'}, JSON.stringify(msgBody));
            };
            that.listenTo = function(receiveDestination, callback) {
                var listenerId = makeListenerId();
                _self.registerListener(listenerId, receiveDestination, callback);
                _self.stompListenTo(receiveDestination);
            };
            that.destroy = function() {
                angular.forEach(listenerIds, function(listenerId) {
                    _self.removeListener(listenerId);
                });
            };
            return that;
        };

        //Listeners are the local callbacks, they are all associated with a destination
        var listeners = {};
        _self.registerListener = function(id, destination, func) {
            listeners[id] = {
                destination : destination,
                listenFunction : func
            };
        };
        _self.removeListener = function(id) {
            delete listeners[id];
        };

        //Here is where the local routing happens.
        function notifyListeners(msg) {
            angular.forEach(listeners, function(listener) {
                if(listener.destination === msg.headers.destination) {
                    msg.body = JSON.parse(msg.body);
                    listener.listenFunction(msg);
                }
            });
        };
        _self.stompListenTo = function(destination) {
            stompClient.subscribe(destination, function(d) {
                notifyListeners(d);
            });
        };

        var on_connect = function(x) {
            deferred.resolve(makeWrappedClient(stompClient));
        };
        var on_error =  function() {
            deferred.reject("Failed to connect to message broker");
        };

        //Called when messages are received on a temporary channel
        stompClient.onreceive = function(m) {
            notifyListeners(m);
        };

        stompClient.connect('guest', 'guest', on_connect, on_error, '/');

        return deferred.promise;
    }])

And then using it:

        StompClientFactory.then(function success(client){
            console.log('connected to rabbitmq');
            
            //using it with permanent queues
            client.listenTo('/amq/queue/a-client', function(data) {
                console.log('got this on a-client queue:');
                console.log(data.body);
            });
            var send = client.makeSender('/amq/queue/a-client');
            send({
                msg: '123',
                status: true
            });
            
            //Using it with temporary queues
            client.listenTo('/queue/temporary-client', function(msg) {
                console.log('got this on tqueue queue:');
                console.log(msg.body);
                client.send(msg.headers['reply-to'], {
                    msg: "got the message on the temporary queue"
                });
            });

            var temporarySender = client.makeTempSender('/queue/temporary-client',function(d){
                console.log("Got this back on the tmp queue:");
                console.log(d);
            });
            temporarySender({
                msg: "temporaryMsg"
            });
        }, function error() {
            console.log('failed to connect to rabbitmq');
        });

I will add a Git repo containing this example code, and will probably refine it a bit until then.

Ionic Hot Code Push (Sun, 01 Mar 2015)

There are several benefits to being able to hot code push updates to your app's users without having to wait for their acceptance or manual intervention. One major benefit is that it makes it possible to instantly patch critical bugs in an app; another is that it makes it possible to ensure that the app and the backend are always in sync. That is, you do not need to maintain indefinite backward compatibility in your app backend.

This is likely the most significant benefit to most organizations. Maintaining backward compatibility is complicated, and the complexity grows with the number of deployed versions of the system. One huge benefit of a traditional web-based system is complete version synchronization between client and backend. With native mobile app systems this is most often not the case, and the released versions of the apps can lag behind the version of the deployed backend.

But by ensuring that all instances of the mobile app are running the latest software, we can avoid the requirement of constant backward compatibility, and thereby reduce the complexity and cost of the overall system.

Several frameworks already offer the ability to push code to mobile applications. For instance:

Meteor – offers hot code push and several other outstanding features! But Meteor also places a number of requirements on the structure of the client app and server backend. It may therefore not be suitable for a project if all you are interested in is hot code push.

Trigger.io and AppGyver – two commercial alternatives, both of which offer a lot more than just hot code push.

Native mobile applications do not offer a simple way to perform hot code updates. More importantly, the iOS platform explicitly forbids it, as seen in section 3.3.2 of the iOS Developer Program License Agreement (you have to be enrolled to be able to read it). But the same section apparently allows hybrid applications to fetch code changes, as long as these changes do not change the character of the app and the code is run in a web view (which it is in the case of Ionic/Cordova).

Since hybrid applications run in a web view, essentially a wrapped browser, the hybrid app's code could just as easily have been served by a web server. This property is what we need to exploit when creating a hot code push system.

I have created an example application here. The application is based on Cordova/Ionic and showcases a technique that allows us to push code updates to an already installed app automatically, without requiring the user to accept the update. Granted, it only does so for Android (I do not own a Mac), but it should work on iOS with minimal modifications. The “push” is actually not initiated by the server; rather, each time the application starts it actively checks for updates against a remote server. This is done using an HTML5 feature called the application cache. The first thing the application does at initialization is try to download a copy of the application cache manifest. The downloaded copy of the manifest is compared against the already stored version. If any changes are detected, all the application resources (HTML, CSS and JavaScript) are downloaded from the remote server. If the application is unable to get hold of the remote cache manifest, the cached version of the app is used (HTML, CSS and JavaScript are loaded from local storage).

What this means is that the application, once loaded the first time, will keep working even if the terminal lacks network connectivity. The idea is simple, but there are a few quirks in the application cache specification which require us to take some extra precautions. An HTML5 application which utilizes the application cache will always load resources from the cache first, and then update its cache with freshly downloaded versions of the resources if they are available. But it will not automatically reload the page with the fresh version of the application. This requires us to listen to the cache events that signal the availability of a new version of the resources, and manually force an update by reloading the site, like so:

window.addEventListener('load', function(e) {
        if (window.applicationCache) {
            window.applicationCache.addEventListener('updateready', function(e) {
                if (window.applicationCache.status == window.applicationCache.UPDATEREADY) {
                    // Browser downloaded a new app cache, apply it
                    window.applicationCache.swapCache();
                    window.location.reload();
                } else {
                    // Manifest has not changed
                }
            }, false);
        }
    }, false);

Another thing which is a bit surprising is that the application cache does not actually look for changes in the resources, only for changes to the cache manifest. It is therefore important to update the cache manifest across versions; updating a comment line is sufficient if no files have been added or deleted.
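For instance, in a manifest like the one below (file names are illustrative), bumping the version comment is enough to make every client re-download the listed resources:

```
CACHE MANIFEST
# version 2015-03-02   <- changing this comment triggers a full refetch
index.html
css/app.css
js/app.js
```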

Furthermore, it is important to make sure that all the files listed in the cache manifest are actually available for download. If they are not, the caching process will terminate with an error and script execution on the site will fail.

Other than that, the implementation is very straightforward. If this were to be used in production, some further precautions would likely be needed: for instance, bundling a first version with the app package, in case the user does not have network access the first time the app is opened. This would prevent the ugly network failure page which is otherwise the result.

It is probably also a good idea to manage the transition between app versions a bit more gracefully than the method shown in the GitHub demo (which simply reloads the app without notifying the user).

But as you can see, it is absolutely possible to perform hot code updates in hybrid applications with what is at hand today. If anyone has examples of this being done live based on open source software, I would be very interested to hear about your experiences!

Bower vs Nexus (Fri, 20 Feb 2015)

When learning something new, the fastest way is often to compare it to something similar that you already know well. If you are familiar with either Nexus, the dependency distribution system common in most JVM-based systems, or Bower, the front-end JavaScript system, the other system should hopefully be a bit more familiar to you after you have read this post.

Both Bower and Nexus have the same goal: to provide a distribution channel for prepared artifacts. However, they go about the task in quite different ways. The simplest way of explaining the differences between the two systems is to compare the two primary operations they share: retrieving artifacts and publishing artifacts.

[Diagram: Nexus retrieval]

[Diagram: Bower retrieval]

As you can see, the Nexus retrieval mechanism is straightforward, involving only the Nexus server and the client. In the case of Bower, three parties are involved in the transaction: the client, the Bower registry and a Git repository containing the artifact. This is the key to understanding the Bower system. A Bower registry simply acts as a map from an artifact id to the Git repository which contains all versions of the artifact. The registry does not hold the version information; that is maintained as Git tags in the repository. This becomes apparent if we compare the publishing procedures:

[Diagram: Nexus publish]

[Diagram: Bower publish]

The two systems achieve the same goal through very different methods. Nexus is a fully fledged file repository, while Bower is little more than a Git repository registry and some Git commands bundled into a client.

In practice this means that the Bower system is much more lightweight than Nexus and other centralized distribution systems, such as npm. The Bower registry does none of the heavy file transfer or storage; instead it relies completely on the Git repositories for distribution. But this requires that the Git repositories be publicly available without access control (or at least available to everyone who has access to the registry). It also means that the artifact intended for distribution is stored with the source code. One last thing to note is that, in contrast to many other JavaScript distribution systems, Bower is fully capable of handling HTML and CSS as well as JavaScript.
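As an illustration, a minimal bower.json for such an artifact might look like the sketch below (name and paths are made up). Note that no version appears in the file itself; as described above, the versions live as Git tags in the repository:

```json
{
  "name": "my-widget",
  "main": "dist/my-widget.js",
  "ignore": ["src", "test"]
}
```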

Hopefully the comparison of Nexus and Bower has shed some light on whichever system was unfamiliar to you!

Further reading:

http://chariotsolutions.com/blog/post/javascript-apps-bower-or-not-bower/

Web Workers in Angular (Sat, 14 Feb 2015)

Using Web Workers in Angular is simple in theory, but it requires a bit of tinkering to make them feel like a part of the application. This post explains a method which can make Web Workers in Angular (almost) seamless.

With the introduction of web workers, web developers finally gained the ability to run long, CPU-intensive operations in the browser. These operations would otherwise have blocked the GUI thread (the only thread up until this point) and frozen the GUI.

However, the web worker specification is a bit awkward, especially when combined with AngularJS. To maintain complete backward compatibility, and to protect developers from common parallel programming pitfalls, workers have a VERY limited exchange of information with the main browser thread. The main thread and the web worker thread communicate only through message passing, and all messages between the two are deep cloned (transferable objects can be used to minimize the overhead associated with the cloning). All this is well and good; it protects us from ourselves. But it makes for awkward programming.
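The deep-clone semantics are easy to demonstrate: structuredClone (available in modern browsers and Node) uses the same structured clone algorithm that postMessage applies to worker messages, so the receiver always sees a copy rather than a shared reference:

```javascript
// Messages crossing the worker boundary are copied with the structured
// clone algorithm; mutations on one side never leak to the other.
const original = { payload: [1, 2, 3] };
const received = structuredClone(original); // same algorithm postMessage uses

console.log(received !== original);   // true: a distinct object
received.payload.push(4);
console.log(original.payload.length); // still 3: no shared state
```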

Keeping the main and the web worker threads completely separated means that resources loaded in the main thread are not automatically loaded in the web worker thread. For an Angular application this means that if the app uses a web worker, the Angular context is not automatically accessible in the worker. If we do not load Angular explicitly in the web worker, it will only be capable of standard JavaScript. What's worse is that a web worker needs to be loaded from a URL! Normally this means that the developer specifies the JavaScript file that contains the web worker code like this:

var worker = new Worker('worker.js');

By using an addition to the web specification which allows us to create object URLs, we can get around the requirement of loading web workers from separate files. This addition to the standard allows us to generate URLs for blobs. Blobs can be pretty much anything, but we are interested in text that can be executed as JavaScript. A good explanation of this technique can be found here.

var blobURL = URL.createObjectURL(new Blob([
    "var i = 0;//web worker body"
    ], { type: 'application/javascript' }));
var worker = new Worker(blobURL);

We cannot, and do not want to, get around the fact that the main and worker threads have separate contexts. But we can mitigate the inconvenience by leveraging the powerful dependency injection framework that Angular ships with.

I have created an attempt at this here: https://github.com/FredrikSandell/angular-workers. angular-workers provides an Angular service which takes a list of dependencies and a function, and returns a web worker, called an AngularWorker, based on the function's source. The process is visualized in the sequence diagram below.

[Sequence diagram: AngularWorker creation]

Once the promise of an AngularWorker has resolved, the application can repeatedly use the initialized AngularWorker for heavy-duty work. The AngularWorker communicates its result to the main thread through an Angular promise. Within the web worker function we are ensured that an Angular context exists and that the dependencies specified at initialization are present.

[Sequence diagram: AngularWorker execution]

A potential use case for this is performing CPU-intensive processing of large AJAX responses without locking the main GUI thread; this is the problem which first made me investigate the issue. The Angular services used in the web worker can be handled like any other service, and can therefore be tested like any other part of the Angular application!
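The core trick behind this approach can be sketched roughly like so (this is an illustration, not the actual angular-workers API): a function's source is recovered with toString() and wrapped into a script which a worker built from a blob URL can then execute.

```javascript
// Assemble worker source from dependency script URLs and a plain function.
// The URLs and the doubling function below are illustrative only.
function buildWorkerSource(dependencyUrls, workerFn) {
  var imports = dependencyUrls
    .map(function(url) { return "importScripts('" + url + "');"; })
    .join('\n');
  return imports + '\n' +
    'var workerBody = ' + workerFn.toString() + ';\n' +
    'onmessage = function(e) { postMessage(workerBody(e.data)); };';
}

var source = buildWorkerSource([], function(input) { return input * 2; });

// In a browser the generated source would then become a worker via a blob URL:
// var url = URL.createObjectURL(new Blob([source], { type: 'application/javascript' }));
// var worker = new Worker(url);
```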
