Tuesday, December 12, 2017

A week with Apple Watch

So, here I am. After resisting the idea of a watch in the first place, I recently bought myself an Apple Watch (Series 1). I didn't go for the Series 3 because I am not really a swimmer or a runner, and the Series 3 doesn't offer substantially more in features to justify the price difference. With the money saved, you can actually buy AirPods as well.

In 2013, months before Apple released its first Watch model, I had written a post arguing that calls, texts and tweets won't define a smartwatch (see "It is not calls, text and tweet that would make a smartwatch"). I am pleased that I wrote it, because it has held up in every aspect. Apple, with its Watch, initially had a misstep: it tried to position itself as a luxury watchmaker, failed, and quickly pivoted its strategy to what a wearable watch truly makes sense for: telling time, tracking fitness, offering a quick way to call up a digital assistant, and third-party apps to extend the functions not in the core system. When I see the Siri watch face on my watch, I can't help but pat myself on the back at how close this interaction model is to what I described in that article :)

The interaction model that I proposed and the Siri watch face have so much in common (see http://tovganesh.blogspot.in/2013/09/it-is-not-calls-text-and-tweet-that.html).
Third-party apps are there, but they still have a long way to go.

There is still a lot to improve before we really have a wearable computer that doesn't look like a piece of brick, and one whose battery lasts at least a full day of heavy use. The Series 3 with LTE is definitely not that device, as Joanna Stern of the WSJ notes in her review of the latest iteration of the watch, the one I didn't get (https://www.wsj.com/articles/apple-watch-series-3-review-untethered-and-unreliable-1505905203).

One thing is sure: smartwatches are here to stay. It remains to be seen whether they take as much time as smartphones did to evolve, or whether we will see substantial breakthroughs in a much shorter period.

Monday, November 20, 2017

Using the iPhone for programming

I have been using my iPhone like a computer for some time now. The primary thing I do with my computer is programming. I dislike laptops, and dislike carrying one around even more. About two months ago I experimented with using the iPad as my primary go-to computer. With the multitasking enhancements introduced in iOS 11, I could pretty much use it as a primary computer with a number of work apps installed: Terminus (for ssh to a development Linux server), Pythonista (a fantastic on-device Python interpreter with a number of libraries I use, numpy to be specific), Working Copy (for managing git repositories) and Textastic (the most fantastic source code editor for iOS). With these apps in place, my next quest was to see if I could manage even without the iPad around. This is week 2 of the experiment, and I don't think I have faced many issues with on-the-go programming. These tools just work great for me. Now I can pretty much keep my laptop at home and use the desktop at work, while on the move I just use my phone. A few things like join.me and TeamViewer may just work better on a bigger screen, but then I can also connect my phone using the lightning-to-VGA dongle that I sometimes carry, if there is really the need.

Oh - and did I tell you that I wrote this post on that same phone? ;)

Peace. 

Friday, August 25, 2017

Programming in Devanagari [Revisited]

Exactly a decade ago, I wrote this post: http://tovganesh.blogspot.in/2007/08/programming-in-and-for-devanagari.html. Back then I was exploring JavaFX, released by Sun Microsystems. I am no longer using JavaFX actively, but a decade later I am exploring Go, and the first code I wrote this morning was this:

package main

import "fmt"

func main() {
    fmt.Println("ॐ नमो भगवते वासुदेवाय")
}

So I just thought of reconnecting with a decade-old post. The idea stays; the mode has changed.

Tuesday, August 01, 2017

Simple script to extract final GAMESS geometry

I am dabbling with QM codes again and needed a quick script to extract the final converged geometry from a GAMESS output file, without much baggage of other dependencies, so I wrote a quick one in Python. You can get it from GitHub: https://github.com/tovganesh/myrepo/blob/master/extractConvergedGeometry.py
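The script linked above is the reference; as a minimal sketch of the idea (assuming the "EQUILIBRIUM GEOMETRY LOCATED" banner that GAMESS prints when an optimisation converges - check your output, versions differ), it goes something like this:

import sys

# Sketch: pull the last converged geometry out of a GAMESS output file.
# Assumes the usual GAMESS banner line; the real script above may differ.
MARKER = "EQUILIBRIUM GEOMETRY LOCATED"

def extract_final_geometry(log_file):
    lines = open(log_file).read().splitlines()
    # Index of the last equilibrium-geometry banner in the file.
    start = max(i for i, line in enumerate(lines) if MARKER in line)
    geometry, started = [], False
    for line in lines[start:]:
        parts = line.split()
        try:
            # Atom lines look like: SYMBOL  CHARGE  X  Y  Z
            atom, charge, x, y, z = parts
            geometry.append((atom, float(x), float(y), float(z)))
            started = True
        except ValueError:
            if started:
                break  # we ran past the coordinate block
    return geometry

if __name__ == "__main__":
    for atom, x, y, z in extract_final_geometry(sys.argv[1]):
        print(atom, x, y, z)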

I will call these scripts quick and useful scripts (QUS) henceforth, and will post others when I feel the need :)

Friday, June 30, 2017

Count number of pages for each PDF in a folder

This is just a note about a script which may be useful to you. This one calculates the number of pages in each PDF (via pdftk) and prints the total count.

import sys
import fnmatch
import os

# Collect all PDF files under the folder given as the first argument.
matches = []
for root, dirnames, filenames in os.walk(sys.argv[1]):
    for filename in fnmatch.filter(filenames, '*.pdf'):
        matches.append(os.path.join(root, filename))

count = 0
for mat in matches:
    # Ask pdftk for the page count; quote the path in case it has spaces.
    cmd = 'pdftk "' + mat + '" dump_data | grep NumberOfPages > pn.log'
    os.system(cmd)
    try:
        f = open("pn.log")
        # pn.log holds a line like "NumberOfPages: 12".
        l = f.read().strip().split(":")[1].strip()
        f.close()
        print(mat + "," + l)
        count = int(l) + count
    except (IOError, IndexError, ValueError):
        # Skip files that pdftk could not read.
        continue

print(count)
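To run it, point it at a folder (assuming pdftk is on your PATH; countpages.py is just an illustrative name for the script above):

python countpages.py ~/papers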


Have a great weekend! :)

Tuesday, June 06, 2017

On "The Computer's Common Sense"

Background
On the surface of it, this is a followup to the blog post "The Computer's Common Sense" [read here: https://rulasense.wordpress.com/2017/05/] by my friend AKD (https://twitter.com/alok_damle), who is passionate about building a new kind of intelligent system. This is also about my understanding of the machine learning tools that I have used in my work at VLife (which is now Novalead Pharma). These thoughts come from a person with a beginner-to-intermediate ML background, so this is more of a learning-via-conversation exercise for me, more philosophically skewed than technically deep.

Artificial Intelligence vs Human Intelligence (commonly called common sense)
AKD starts off his blog with a title that makes you think a bit. It seems to equate Human Intelligence [W1] with common sense [W2]. To me, however, common sense (for all its uncommonness) is one part of human intelligence; it is not the only form of intelligence that humans have. Further, common sense, as the name suggests, is not something specific to an individual, but has evolved over time from a group of individuals, representing common knowledge. To put it in other words, it is "ensemble intelligence" rather than something that represents an individual human. Thus, I feel that human intelligence is a combination of many factors, only one of which is common sense. The decisions that humans take are a cumulative effect of various factors.

RULA – Read Understand Learn Apply
If we get past that oversight, some of the things begin to make sense to me. The screwdriver example (https://rulasense.wordpress.com/2017/05/17/artificial-intelligence-vs-common-sense/) makes sense for the current state of the art in AI. It is quite possible that no AI will suggest using your fingernails instead of a screwdriver! * But the reason for this probably has to do with other environmental factors that the human is in. The human brain, more often than not, tries to correlate the present situation with past situations it has encountered (when in isolation), or it tries to correlate with what others have discovered in similar situations (the common sense part). In isolation, a human brain probably works by a "read (or observe) - understand - learn - apply" cycle, but that may not always be the case. The second term, "understand", is something of a misnomer here, because one can shortcut this to "read (or observe) - learn - apply", with "understanding" coming at a later stage, probably a far later stage. A lot of what we humans do probably translates to "read (or observe) - learn - apply". For instance, take any kid: he observes his parents, tries to learn from them, and then does similar things. He doesn't understand what he does till he grows up. Thus I feel "understanding" comes after a series of reinforcement learning and application cycles over what was observed. Evidently a lot of AI at the moment is focused on the "read (or observe) - learn - apply" cycle and probably never comes to the point of "understanding". Deep learning may, however, be the one that actually brings understanding to this process [W3].

Machine Learning vs Human Learning
That brings me to the next part of the blog, which has a rather generic title. I think the core theme of this section is to bring home the point that most AI today is basically data driven. Human learning, however, can happen at a much superior pace and doesn't need as much data. This is quite true. But I think this is possible because the human brain does not work alone: our brains are connected with a lot of other intelligent beings, and this collective brain power, which to a large extent is what "common sense" encompasses, influences our individual brains' learning capabilities. The "collective brain power" is not necessarily human; it could come from any other form of intelligent behaviour, other animals, or even insects. The human brain is capable of capturing and basing its learning on information acquired by other forms of intelligence. A counterpoint to the kids example above is how often we find that the little ones think differently from what was previously conceived. That, I feel, is because the kid's brain is somewhat "disconnected" from the "collective brain power", which prompts it to potentially discover new ways to solve a problem, where an adult's brain just defaults to the "common sense" part.

AI at the moment is limited to what humans feed it. It doesn't have unrestricted access to the environment outside, as we humans have. Whether that is a shortcoming of current AI, or whether AI as implemented today needs a fundamental rethink, is yet to be seen. AKD thinks that there is an alternate way that is not yet explored. I await to see what that is.


NOTES:
* I am not sure how IBM Watson [R1] would respond, because Watson is a totally different take, at the edge of AI research today; that it could beat humans in the game of Jeopardy! is nothing short of amazing.

References:
R1) IBM Watson: https://www.ibm.com/watson/
R2) L. Deng, G. Tur, X. He, and D. Hakkani-Tur. "Use of Kernel Deep Convex Networks and End-To-End Learning for Spoken Language Understanding," Proc. IEEE Workshop on Spoken Language Technologies, 2012

Wikipedia:
W1) Human Intelligence https://en.wikipedia.org/wiki/Human_intelligence
W2) Common Sense https://en.wikipedia.org/wiki/Common_sense
W3) IBM Watson https://en.wikipedia.org/wiki/Watson_(computer)

Friday, June 02, 2017

Serval Project: Carrier independent network

Almost 5 years back, while putting down my idea of building a mobile experience for myself, I had suggested that a carrier-independent network is what I want: something that would not only distribute the burden of creating infrastructure but also free us from lousy carrier plans and create a world where communication between humans is free, wherever they are. [Ref: http://tovganesh.blogspot.in/2012/01/kosh-building-mobile-user-experience.html]. Obviously carrier-based / satellite systems are necessary for emergency situations, but our reliance on them could definitely be minimised.

So when I saw the Serval Project (http://www.servalproject.org/), I was pleasantly surprised that they have exactly the same goal. Moreover, instead of building a whole new OS as I had earlier proposed, they are going for the more practical solution of putting it in an Android app. This is a big shout-out to you guys developing the Serval Project. It is like the moment when you feel that you are not alone in your thoughts about how to make things different in this world. I have just installed the app on my secondary Android device and tested it with a friend. Though the interface is quite primitive at this stage, and the call quality is not up to the mark, it works. It is experimental and yes, it will improve.

After digging a bit into the history of the Serval Project (http://developer.servalproject.org/dokuwiki/doku.php?id=content:about), I discovered that it was proposed almost 2 years before I had written the above-mentioned article, and that an early version of the system was used for emergency response during the Haiti earthquake.

The Serval Project is also open source (https://github.com/servalproject), and in the coming days I plan to explore the project in more depth to see if I can contribute in some way.

Meanwhile, anyone should be able to install the app from Google Play (https://play.google.com/store/apps/details?id=org.servalproject) and be part of the experiment, and of the quest to build a carrier-independent backup network.

Saturday, April 01, 2017

This is how birds enjoy water ...

This is how much the sunbirds in my garden enjoy even the little water you share with them this summer. They are pretty cool :)

Musings on Nature around Gandhkosh

Some shots of nature taken around DSK Gandhkosh, my work home, over a period of a few months. Hope you like them :)

Friday, February 24, 2017

A "Generic Component" in Angular 2

I am generally a lazy person, especially so when it comes to writing UI code (although I can have endless comments and critiques on someone else's UI and UX ;-)). I find it repetitive, and any form of repetitive behaviour is ripe for automation. One such activity I recently encountered in a project was creating forms - a lot of them - with an Angular 2 front end. I remember that in the good old MS Access days these were just a click away; although I disliked it, this is what I did in my first paying job, in the summer break of my school days.
I am not sure we have something equivalent here, more so when Angular is changing so rapidly and keeps breaking every other month.

However, it is rather easy to write a "generic component" that can simply be configured using a JSON definition, giving you a new form without writing any of the usual code. This has two main advantages: 1) you have a single place to fix when Angular changes something in its structure; 2) you have a super-reusable form engine that can be configured on the fly, allowing you to do cool things like storing form definitions in a backend service and automatically updating the Angular 2 app on the fly.

To start off, you will need to define a JSON which can be used to construct the UI on the fly.

this.componentJSON = {};
this.componentJSON['title'] = "Trials";

this.componentJSON['formItems'] = [
 { "type": "text", "id": "trial", "name": "Trial Name", "description": "Name of trial", "theValue": "", "param_name": "trial_name" },
 { "type": "text", "id": "sponsorer", "name": "Sponsorer", "description": "Sponsorer of the trial", "theValue": "", "param_name": "sponsorer_name" },
 { "type": "button", "id": "submit", "name": "Submit", "description": "Submit the form", "api_call": this.createTrial },
];


The above JSON is intended to create a simple form with a title, two text fields and a button. The api_call parameter of the button is a service object used to make an API call. This JSON should be defined in any parent component that intends to use this component, typically being initialised in the ngOnInit() method.

Next we define generic.component.ts and generic.component.html as follows:

import { Component, OnInit, Input } from '@angular/core';
import { FormsModule } from '@angular/forms';
import { CommonModule } from '@angular/common';
import { BrowserModule } from '@angular/platform-browser';

@Component({
   moduleId: module.id,
   selector: 'generic-cmp',
   templateUrl: 'generic.component.html'
})
export class GenericComponent implements OnInit {

   @Input()
   componentJSON: any;

   constructor() { }

   ngOnInit() {
   }

   callAPI(item: any) {
      // The service object carried in the JSON makes the actual API call.
      item.api_call.api_call(this.processInput(this.componentJSON)).subscribe((res: any) => {
         if (res.status == 0) {
            alert(res.result.message);
         } else {
            alert(res.error.error_message);
         }
      });
   }

   processInput(componentJSON: any) {
      var formItems = componentJSON['formItems'];

      // Collect the values of all non-button items into the API payload,
      // keyed by each item's param_name.
      var params: any = {};
      for (var frmItm in formItems) {
         if (formItems[frmItm].type != 'button') {
            params[formItems[frmItm].param_name] = formItems[frmItm].theValue;
         }
      }

      return params;
   }
}
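The matching generic.component.html is not reproduced here; a minimal sketch of it, assuming simple ngModel bindings over the type, id, name, description and theValue fields used in the JSON above, could be:

<h3>{{componentJSON?.title}}</h3>
<div *ngFor="let item of componentJSON?.formItems">
   <label *ngIf="item.type == 'text'">
      {{item.name}}
      <input type="text" [id]="item.id" [name]="item.id"
             [placeholder]="item.description" [(ngModel)]="item.theValue" />
   </label>
   <button *ngIf="item.type == 'button'" [id]="item.id"
           (click)="callAPI(item)">{{item.name}}</button>
</div>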
The trick, as always, is to generalise the JSON and the generic component above to take care of different input forms as well as to add validations. The callAPI function above, for instance, basically generalises an API call, whereas the processInput method creates the parameter payload for the API call from the JSON we created earlier. The advantage, again, is that simply changing the JSON pretty much re-creates the whole of the HTML. Creating a different form just requires defining a new JSON.

Since this component needs to be used in multiple places, it would be wise to declare the directives associated with it in a shared.module.ts file:

import { NgModule, ModuleWithProviders } from '@angular/core';
import { CommonModule } from '@angular/common';
import { FormsModule } from '@angular/forms';
import { RouterModule } from '@angular/router';
import { BrowserModule } from '@angular/platform-browser';

import { NameListService } from './name-list/index';
import { GenericComponent } from './generic-component/generic.component';

@NgModule({
    imports: [CommonModule, RouterModule, FormsModule],
    declarations: [GenericComponent],
    exports: [CommonModule, FormsModule, RouterModule, BrowserModule, GenericComponent],    
})

export class SharedModule {
    static forRoot(): ModuleWithProviders {
        return {
            ngModule: SharedModule,
            providers: [NameListService]
        };
    }
}


Now a third component can embed this reusable, configurable component using:
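Given the generic-cmp selector and the componentJSON input defined above, the embedding markup is simply:

<generic-cmp [componentJSON]="componentJSON"></generic-cmp>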



That's it! You should have a fairly reusable forms module that you can fully configure using JSON, without needing to keep writing the HTML and related component code every time.

A similar pattern may also be used for creating a generic service call. This is again useful, as your code can remain independent of changes that may occur in the basic underlying syntax of, say, actually calling the HTTP post method.
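For illustration, such a service (a hypothetical sketch; GenericService and the /api/endpoint URL are made-up names, not from this project) could look like:

import { Injectable } from '@angular/core';
import { Http } from '@angular/http';
import 'rxjs/add/operator/map';

@Injectable()
export class GenericService {
   constructor(private http: Http) { }

   // Single place that knows how the HTTP call is actually made; if Angular
   // changes the underlying Http API, only this method needs to change.
   api_call(params: any) {
      return this.http.post('/api/endpoint', params)
                      .map((res: any) => res.json());
   }
}

An instance of such a service is what the api_call field in the JSON above (this.createTrial) would hold, which is why callAPI invokes item.api_call.api_call(...).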

Wednesday, January 04, 2017

My tech wish list - 2017

1. MacBook with inbuilt cellular connectivity
2. iPhone with no ports - not even lightning - with wireless charging. For a truly courageous future.
3. If I can just use my phone for everything - where is that elusive Surface Phone?
4. On-demand apps on iOS - so that I can save on my precious local storage
5. Driverless cars - where are they?
6. Super-personalised medicines
7. AI assistant for myself - without a cloud-connected device
8. Battery that lasts for a month on a 15-minute charge and doesn't explode.