Bangladesh vs West Indies 2nd Test: Day 1 (from a cricket fan)

…no no no, seriously, show me a Day 1 of a Test match in the recent past (or even the distant past) where a side scored at a run rate of 4.24 at stumps! You might be opening a new tab in your browser as you read this and heading to the Cricinfo statistics page just to prove me wrong (and you might prove me wrong), but establishing my argument as a fact is not my goal here anyway 😉 The naive point I am trying to make is that a higher run rate has lately become a feature of Bangladesh Test cricket! (Please don’t get started with “Test cricket is not about run rate, it’s all about patience, consistency” and all that blabla… lemme finish first!)

Now if you go to the Cricinfo statistics page and look at our recent Test innings, you will see that we are in fact scoring good totals these days, and mostly at very healthy run rates! Suffice it to say, every now and then we do have terrible (I mean seriously nasty, terrible) batting collapses, but compared to our early-2000s history we are not collapsing that often, and yet our batting approach (too many scoring shots) has remained pretty much the same!

Let’s go back to our run rate thingy. Remember Australian Test cricket, when the show used to start with a Hayden storm and end with Gilly’s thunder? We loved that! Australia made it quite usual to finish a day on 340/350. Don’t you miss those pacey Test innings? Then watch Bangladesh Test batting more regularly 😉

I mean look, I am not underestimating the beauty of sensible, relaxed Test batting at a slower run rate, but come on! At the end of the day, who doesn’t love scoring shots! Even if a boundary comes off a thick edge we still clap (and it happens far too often to Bangladeshi batsmen), knowing how stupid and lucky the bastard was!

“The end result” would be the biggest criticism of Bangladesh-style Test batting. But do I even need to say how many times we got “almost there” to a Test win, even with this seemingly nonsensical batting style!?

So, let’s just enjoy our apparently silly Test batting instead of moaning over it! Of course it’s yet to bring a “REAL” Test win for us, but trust me, it will soon! I know bearing the pain of defeats like the BD vs WI 1st Test is more severe than getting ditched by your girlfriend, but hey! Isn’t one “Abul Hasan’s (I am still looking for the right adjective to put here) 100” enough to believe in the future of Bangladesh Test cricket? Btw, talking about Abul Hasan, please check out the YouTube video link below. Adios :)

Sunny’s warm reception to Abul Hasan after his debut century

Image credit: Cricinfo

Network effect of the Semantic Web

In brief, the Semantic Web is a network of distributed databases, as opposed to the existing web, which is more of a network of distributed webpages. According to Robert M. Metcalfe (also known as Bob Metcalfe), the value of a network is proportional to the square of the number of nodes on the network. Mathematically, it can be expressed as

V = n^2 (where “V” is the value of the network and “n” is the number of nodes in that network)

Metcalfe’s law is also known as the “network effect”. Even though Robert M. Metcalfe initially formulated this law for Ethernet networks, it has since been applied to other networking concepts as well, including web technology, social networking, business networks and so on.
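As a toy sketch (my own illustration, not part of the original law's formulation), Metcalfe's quadratic growth is easy to see in a few lines of Python — doubling the nodes quadruples the value:

```python
def metcalfe_value(n: int) -> int:
    """Value of a network under Metcalfe's law: V grows as n^2.

    The law states proportionality; here the constant is taken as 1.
    """
    return n * n

# Doubling the number of nodes quadruples the network's value.
print(metcalfe_value(10))  # 100
print(metcalfe_value(20))  # 400
```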

Can we also apply Metcalfe’s network-effect law to the Semantic Web to estimate its value? The Semantic Web is still just a vision; however, we can consider Linked Open Data projects as miniature versions of the Semantic Web. Therefore, by estimating the network effect of Open Data we can get an idea of the value of the Semantic Web. By definition, Linked Open Data are connected, so we can consider Linked Open Data as a network of Open Data.

Now, as most Linked Open Data projects follow the W3C Semantic Web recommendations and use RDF, we can consider those individual data as “nodes” and the Semantic Web as a network. So, every time a new datum or dataset is added as Linked Open Data, the value of the Semantic Web increases according to Metcalfe’s law. Below is a visualisation of Linked Open Data sources (which we can consider “nodes” in Metcalfe’s law) –

Figure: Some of the data sources of

In Web 2.0 literature, Metcalfe’s law is often used to emphasise the value of social networks. In the Semantic Web, the value of the Linked Data network would be a lot greater than that of Web 2.0 (or the social web), as in this case the number of nodes (data or datasets) is much bigger.

If we take Reed’s law into consideration for defining the value of the Semantic Web, we get an even larger valuation than the one we get using Metcalfe’s law. Under Reed’s law, the value of a network grows much faster than under Metcalfe’s law: the value and power of a network increase exponentially with the number of nodes in the network, expressed mathematically as 2 to the nth power (2^n). That means –

V = 2^n (where “V” is the value of the network and “n” is the number of nodes in that network)

In Reed’s law, the number of possible sub-groups within a network is also taken into consideration. Again, social networking websites are an ideal analogy to explain this growth. In social networks (e.g. Facebook) we can form “groups”. Metcalfe’s law considers only the number of “members”, not the number of “groups”. However, members can obviously also form groups within the social network, and these groups can add value to the network on a much larger scale than the value created by a single individual member.
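To make the contrast concrete, here is a small Python sketch (my own illustration) of where Reed's 2^n count comes from: it is the number of possible sub-groups (subsets) that n members can form, and it quickly dwarfs Metcalfe's n^2:

```python
from itertools import chain, combinations

def reed_value(n: int) -> int:
    """Value under Reed's law: grows as 2^n, the number of possible sub-groups."""
    return 2 ** n

def all_subgroups(members):
    """Enumerate every possible sub-group (subset) of a small network,
    showing where the 2^n count comes from."""
    return list(chain.from_iterable(
        combinations(members, r) for r in range(len(members) + 1)))

groups = all_subgroups(["A", "B", "C"])
print(len(groups))               # 8 == 2**3
print(reed_value(20) > 20 * 20)  # True: 2^n outgrows Metcalfe's n^2
```

The enumeration includes the empty and single-member subsets, which is why 3 members yield exactly 2^3 = 8 sub-groups.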

Mashup applications are a notable use of Linked Open Data projects. In the Semantic Web, we can take different datasets from different sources and develop more mashup applications. Every time a new Linked Open Dataset is added to the Semantic Web, it also adds the possibility of forming a new group with other dataset(s). Therefore, with the inclusion of a new Linked Open Dataset in the Semantic Web, the value of the Semantic Web increases on a huge scale (2^n) if we apply Reed’s law.

However, Reed’s law is not beyond criticism either. Obviously, not all Linked Data are relevant to each other (even though technically they can be linked). Therefore, critics say Metcalfe’s law and Reed’s law overestimate the value of networks to some extent.

In one of my previous posts, I sort of divided all the Open Data projects into two categories – Linked Open Data projects (e.g. DBPedia) and Non-Linked Open Data projects (e.g. Pachube, DataGM). So, according to the discussion above, by applying Metcalfe’s law and Reed’s law we can say that DBPedia and similar Linked Open Data projects have more network value than Pachube and DataGM, as the Open Data of the former are Linked Data. Now, if we can transform the entire web into the Semantic Web, the network effect of the web will be astronomical, which will enable an enormous number of new applications of the web.

Conceptual arguments on Semantic Web

Lately there have been a lot of discussions going on about whether the “Semantic Web” is going to be the main feature of Web 3.0 or whether it’s just an ambitious vision of Tim Berners-Lee. In this post, I am going to talk about the feasibility of the Semantic Web from some conceptual perspectives, mainly syllogism, AI and Gödel’s incompleteness theorem.

According to Clay Shirky, “The Semantic Web is a machine for creating syllogisms”. The “syllogism” is one of the most famous contributions of the prominent Greek philosopher Aristotle to the study of logic.

A syllogism is a three-step argument with three assertions. The first two assertions are called “premises” and the last one is called the “conclusion”. Here is an example of a syllogism –

1st assertion (premise): Lancaster University is in Lancaster.

2nd assertion (premise): Lancaster is in the UK.

3rd assertion (conclusion): Lancaster University is in the UK.
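This kind of inference is easy to sketch in code. Below is a toy Python illustration (my own sketch, not how real Semantic Web reasoners are implemented) that derives the conclusion from the two premises by treating “is in” as a transitive relation over subject–predicate–object triples:

```python
# The two premises as subject-predicate-object triples.
facts = {
    ("Lancaster University", "is in", "Lancaster"),
    ("Lancaster", "is in", "the UK"),
}

def infer_transitive(triples, predicate="is in"):
    """Repeatedly apply: if (a, p, b) and (b, p, c) hold, then (a, p, c) holds."""
    inferred = set(triples)
    changed = True
    while changed:
        changed = False
        for (a, p1, b) in list(inferred):
            for (b2, p2, c) in list(inferred):
                if p1 == p2 == predicate and b == b2:
                    new_fact = (a, predicate, c)
                    if new_fact not in inferred:
                        inferred.add(new_fact)
                        changed = True
    return inferred

conclusions = infer_transitive(facts)
# The conclusion of the syllogism is derived, not explicitly stated:
print(("Lancaster University", "is in", "the UK") in conclusions)  # True
```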

Even though Aristotle discussed the syllogism around 300 BC, the concept of “Ontology” in the Semantic Web has a direct relation with it. This becomes obvious from the example of an Ontology that Tim Berners-Lee mentioned in his seminal paper on the Semantic Web (title: The Semantic Web) – “If a city code is associated with a state code, and an address uses that city code, then that address has the associated state code”.

Image source:

Syllogism is an important element of the Semantic Web. It helps to discover relations between resources that are true but not explicitly specified. However, Clay Shirky argues that syllogism is not always useful. In the real world, most scenarios are far more complex than Tim Berners-Lee’s example above. At times, the conclusion of a syllogism might be terribly wrong, or the syllogism might take the form of a Sorites paradox.

(Sorites paradox: the paradox of the heap of sand is the classic example. If there are N grains in a heap of sand and we start removing grains one by one, at some point there will be only one grain left. As it’s impossible to decide at which point the leftover sand can no longer be considered a heap, the last remaining grain should also be considered a heap!)

Artificial Intelligence is a frequently discussed topic in Semantic Web literature. Understanding real-world context is an Artificial Intelligence (AI) problem. Even though the Semantic Web concept is relatively new, researchers have been working on AI for quite a long time. Still, AI is far from being implemented in decision-making activities in real-world scenarios. In dealing with real-world scenarios, the Semantic Web took the opposite direction to AI: instead of trying to understand complex real-world scenarios, the Semantic Web aims to describe real-world scenarios in a less complex way. However, as mentioned earlier, such simplification of real-world scenarios often produces terribly wrong results. Below is an example of such a consequence –

Statement 1: Mr X lives in England.

Statement 2: People living in England speak English.

Conclusion: Mr X speaks English.

However, Mr X might be an immigrant who does not speak English. There is no guarantee that, based on any number of factors, a system can always decide the real condition of a particular object or scenario.

A global Ontology for the Semantic Web also requires agreed standards for objects/resources. In the real world, for many objects/resources/things there are no universally agreed standards or definitions. Furthermore, definitions and standards evolve or change. On a small scale it might be possible (e.g. DBPedia) to agree on common standards for describing objects, but for web-scale implementation this is probably too ambitious.

In fact, even the campaigners for the Semantic Web admit the possibility of paradoxical situations in the Semantic Web. According to Tim Berners-Lee, “Semantic Web researchers, in contrast, accept that paradoxes and unanswerable questions are a price that must be paid to achieve versatility”.

Gödel’s incompleteness theorem can also go against the concept of the Semantic Web. According to the W3C’s web document “The Self-Describing Web” –

“RDF provides an interoperable means of publishing and linking self-describing Web data resources, and for integrating representations rendered using other technologies such as XML. The result is a single, global self-describing Semantic Web that integrates not only resources that are themselves built or represented using RDF, but also the other Web resources to which that RDF links, as well as those that can be mapped to RDF using technologies such as GRDDL” (MENDELSOHN, Noah, 2009)

However, according to Godel’s incompleteness theorem –

“…there would always be some propositions that couldn’t be proven either true or false using the rules and axioms of that mathematical branch itself. You might be able to prove every conceivable statement about numbers within a system by going outside the system in order to come up with new rules and axioms, but by doing so you’ll only create a larger system with its own unprovable statements. The implication is that all logical system of any complexity are, by definition, incomplete; each of them contains, at any given time, more true statements than it can possibly prove according to its own defining set of rules”. (JONES, Judy and WILSON, William)

The web has become part and parcel of our lives. It is probably too late to do any new experiments with the fundamental structure of the web. One misinterpretation of semantic meaning might have serious consequences for our lives. The idea of self-describing Linked Data might be appropriate for certain Open Data campaigns, but for a full-blown web-scale implementation a lot more research on the possible consequences should be done. The world may not be ready yet (not even in the near future) to see Pete and Lucy’s (Semantic Web) agents taking all the decisions on their behalf, as described in Tim Berners-Lee’s seminal paper on the Semantic Web.

Why I am a die-hard fan of Steve Jobs but don’t like iStuff

As anticipated, the Steve Jobs biography authored by Walter Isaacson has become a bestseller on Amazon; in fact, as of now it holds the number one position on Amazon’s top 100 books list. After reading it, I came to the conclusion that this book is a must-read for anyone interested in technology and the arts, or even for those who just love reading inspirational books.

Personally, I am not a fan of iStuff, but I am a die-hard fan of Steve Jobs! I know it might sound a bit contradictory, but so were most aspects of Steve Jobs’ life. Throughout his life he sought spiritual peace of mind in Zen meditation and Buddhism, but he was also ruthless to his colleagues and friends when necessary (and also when it was not necessary!). He had a binary world view in which any given piece of work or thing was either “shit” or “amazing”! There was absolutely nothing in between these two extremes of the spectrum. He was the perfect example of a perfectionist.

A few days ago, after Steve Jobs died, someone asked me “…everyone is talking about Steve Jobs, who was he anyway?” Obviously I got slightly annoyed by the fact that s/he did not even know who Steve Jobs was, but I also realised it’s a bit difficult to describe him in one title! I mean, yes, he was the CEO of Apple, but was he an engineer, a designer, a businessman or a visionary? A lot of people may say he was all of those, but to me that’s not really an answer. He was not a universal genius or polymath like Leonardo da Vinci. I would define him more as a 21st-century renaissance man. Even though he held a lot of patents, we still cannot really say he was the creator of any particular Apple product or Pixar work. In the book, there are several examples of how at times he claimed credit for work that was actually done by someone else.

“Genius has side effects” – I think this is the best way to explain the seemingly odd sides of his character. I am a die-hard fan of his because his life story makes me believe that I do not have to be a master of any particular domain to do great work! This belief is so important to me because both my undergraduate and postgraduate studies are so-called “multi-disciplinary” degrees, which eventually made me a perfect example of “Jack of all trades, master of none” (at least as of now). But when I look at his biography, it reminds me that we can still do great work without being a super genius in a particular field. Over and over again, Steve Jobs “made a dent in the universe” by distorting the reality of the genius people around him (in other words, making them believe in the impossible and in infinite human potential).

Now, why don’t I like iStuff? Technically speaking, Steve Jobs made Apple’s product design “integrated”, as opposed to the product designs of companies like Microsoft and Google, which are “fragmented”. In other words, it’s the “Closed Technology Approach” vs the “Open Technology Approach”. Critics say Steve Jobs was a “control freak”; I think that’s true, and this control-freakness is also reflected in Apple products.

I am not going into a detailed discussion of Open vs Closed technology, but all I want to say is that Closed Technology (in this case, Apple) is not for the 99%. Yes, I agree, Closed Technology offers arguably better design, security and service, but if you go to a third-world country and talk about why they should adopt Closed Technology, it’s like asking people to buy expensive nutritious food when they are dying of hunger! Electronic gadgets may malfunction, which is quite acceptable. It is less acceptable when you have bought an expensive product, and Apple products are expensive! After the antennagate issue of the iPhone 4, Steve Jobs admitted in his presentation “…we (Apple) are not perfect”. So, if you know you are not perfect and it’s impossible to be perfect, then what about that (financially) struggling fresher who has just found that his newly bought iPhone is not working and cannot get it fixed by popping into any nearby electronics shop?

I don’t know about other countries, but I used to work for a mobile phone company in Japan that sold the iPhone (I believe they still do) along with their other line of products. If any customer had any trouble with any of those “other phones”, we simply sent it to our engineering department and got it fixed! But if it was an iPhone… oh God! We had to call Apple and get sandwiched between the outburst of the customer and Apple’s “Terms and Conditions”! The bottom line is that all that user-friendliness and the aesthetic value of Apple products might be very important, but consumers are just paying too much for them.

I dealt with so many iPhone users who use an iPhone case (so the aesthetic value is mostly gone, because it’s covered now) and/or can’t afford the mobile Internet usage cost (what’s the point of having a smartphone then?) and/or are not even interested in using smartphone features. They were all just victims of iHype!

Apple’s App Store (from a developer perspective) has shown us that Apple can maintain control and openness at the same time. I hope someday Apple will apply this sort of balanced approach to all its product and service designs. And regarding the price, I think at some point they will be forced to lower prices to remain competitive with Android!

I read somewhere in the Steve Jobs biography that someone sent an email to Steve Jobs criticising Apple’s Closed Technology approach. Unlike most CEOs, Steve Jobs at times used to personally reply to customer complaints, and in that case he did. Steve Jobs defended his Closed Technology approach and they exchanged a couple of emails. In the last email, Steve Jobs asked that person something like “…by the way, what have you created in your life, or do you just criticise others’ creations?” Oops!

Does Open Data really need to be Linked (Data)?

So, what is the next wave after Web 2.0 (aka the Social Web)? If we ask the inventor of the web, Tim Berners-Lee, this question, most probably the answer would be the Semantic Web. The vision of the Semantic Web is to transform the web into a distributed database system where all the data on the web are interconnected and machine-readable. Some Semantic Web evangelists are so optimistic about the future of the Semantic Web that they use the terms Semantic Web and Web 3.0 almost synonymously! According to Tim Berners-Lee’s vision of the Semantic Web, the web should have the functionality of connecting all of its data to each other as Linked Data, just like the hyperlinks between webpages in the present web architecture. Tim Berners-Lee is also a pioneer of the Open Data movement. As Open Data are published in the public domain, they offer the perfect playground for Semantic Web and Linked Data supporters.

In practice, not all Open Data publishers necessarily care about publishing their data as Linked Data. However, the Tim Berners-Lee-led W3C recommends that Open Data and/or government data publishers put their data on the web as Linked Data (and this helps the W3C in its progress towards the Semantic Web). According to Tim Berners-Lee, “The term Linked Data refers to a set of best practices for publishing and connecting structured data on the web”. This idea of Linked Data is also associated with Tim Berners-Lee’s vision of the Semantic Web.

Image source:

Some Open Data publishers follow the W3C recommendation and publish their Open Data as Linked Data. Therefore, the Open Data available on the web can apparently be divided into two categories – Linked Open Data and Non-Linked Open Data. So, the question is: does Open Data really need to be Linked? And who needs the Semantic Web?

At this moment, perspectives on Open Data vary from organisation to organisation, according to the organisation’s nature and its motivation for publishing Open Data. For instance, according to Local Government Improvement and Development (UK), “The idea behind Open Data is that information held by government should be freely available to use and re-mix by the public”. On the other hand, the W3C emphasises the technical aspects and standards of data publishing when discussing Open Data. The W3C’s approach to Open Data is influenced by Tim Berners-Lee’s Linked Data concept.

In technical terms, the Semantic Web refers to the use of specific W3C Semantic Web open standards (RDF, OWL, SPARQL, GRDDL etc.) that have been developed to integrate data over the web and to make web data machine-readable. From an abstract point of view, the purpose of the Semantic Web is to create a web of wisdom that leads to a global knowledge society. There are a number of projects on the web that follow W3C Semantic Web open standards and develop interesting applications. DBPedia is one such project: it extracts structured data from the textual content of Wikipedia. DBPedia aims to convert Wikipedia content into structured data using Semantic Web technologies, so that people can get easier access to Wikipedia’s knowledge base and perform sophisticated queries. DBPedia’s Ontology is not limited to one single domain: it has RDF links with other Open Data knowledge bases like OpenCyc, WordNet, Freebase, UMBEL, MusicBrainz etc. The link-space created by RDF links between DBPedia and other Open Data websites can be considered a miniature version of the ideal Semantic Web, where all the data are interlinked across different domains and websites.
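The benefit of such RDF links can be illustrated with a toy sketch. In the Python below, two hypothetical datasets (all data invented for illustration, loosely in the spirit of DBPedia linking to MusicBrainz) use the same identifier for a subject, so a single query can combine facts from both:

```python
# Two hypothetical datasets of (subject, predicate, object) triples.
# Both use the same made-up URI "ex:TheBeatles" for the subject,
# which is what makes them "linked".
dbpedia_like = [
    ("ex:TheBeatles", "genre", "Rock"),
    ("ex:TheBeatles", "origin", "Liverpool"),
]
musicbrainz_like = [
    ("ex:TheBeatles", "album", "Abbey Road"),
]

def query(triples, subject):
    """Return all (predicate, object) pairs recorded for a subject."""
    return [(p, o) for (s, p, o) in triples if s == subject]

# Because both datasets share the identifier, one query spans both --
# the essence of Linked Data.
merged = query(dbpedia_like + musicbrainz_like, "ex:TheBeatles")
print(sorted(merged))
```

Real Linked Data uses RDF serialisations and SPARQL rather than Python lists, but the principle is the same: shared identifiers let independently published datasets behave like one database.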

However, not all Open Data publishers necessarily publish their data as Linked Data. In fact, Linked Data and the Semantic Web have attracted a lot of criticism as well, and for various reasons not everyone supports the idea of publishing Open Data as Linked Data. For instance, Pachube does not publish its Open Data as Linked Data. As of now, Pachube uses a Folksonomy (user-generated tags) to categorise the types of data feeds published as Open Data on the Pachube platform. Pachube is not using any automated, machine-readable “Ontology” to classify the Open Data.

RDF is not the only option available for putting a machine-readable semantic layer on data. The Triple Tag, or Machine Tag, is another tagging system, used by Flickr and Delicious to add machine-readable semantic information to their photos and bookmarks.
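For illustration, a machine tag follows the pattern namespace:predicate=value (e.g. the geo tags used on Flickr), which a few lines of Python can split into an RDF-triple-like structure:

```python
def parse_machine_tag(tag: str):
    """Split a machine tag 'namespace:predicate=value' into its three parts.

    Example format follows Flickr's machine tag convention, e.g. "geo:lat=57.64".
    """
    namespace, rest = tag.split(":", 1)
    predicate, value = rest.split("=", 1)
    return namespace, predicate, value

print(parse_machine_tag("geo:lat=57.64"))  # ('geo', 'lat', '57.64')
```

The three parts play roughly the same roles as an RDF triple’s predicate and object scoped to a namespace, which is why machine tags are sometimes described as a lightweight alternative semantic layer.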

From the above discussion we can see that there are two methods available for knowledge representation and document indexing on the web – Ontology and Folksonomy. The difference is that a Folksonomy is created and used by humans, whereas an Ontology is designed to be processed by machines. However, new concepts have been emerging that combine Folksonomy and Ontology together, rather than choosing one of them and dumping the other.

Almost all Open Data projects use either a Folksonomy or an Ontology for their content/data classification. As mentioned earlier, by combining both methods into one standard we can make the web more structured. If we can decide on one standard, then similar Open Data projects can share their resources with each other, just like DBPedia. Therefore, the answer to “Does Open Data really need to be Linked Data?” is probably YES, as it makes Open Data more structured and searchable – but they had all better follow a single standard technology (e.g. RDF or Machine Tags) for making their Open Data linked. However, transforming the entire web into the Semantic Web is too challenging, and its feasibility and necessity are also questionable.
