
#Data as a Service: REST #APIs Transforming the Cloud era

History of Cloud Computing

Cloud computing is a kind of Internet-based computing that offers pooled computer processing resources and data to computers and other devices on demand. It is often referred to as "the cloud": the delivery of on-demand computing resources, everything from applications to data centers, over the Internet, normally on a pay-for-use basis (Armbrust et al., 2010). The foundations of cloud computing lie in the "time sharing" of the 1950s; back then mainframe computers were huge, occupying plenty of room, and because of the cost of purchasing and maintaining mainframes, organizations could not afford to buy one for every user. The solution, therefore, was "time sharing", in which multiple users shared access to data and CPU time. In 1969, J.C.R. Licklider helped establish the ARPANET (Advanced Research Projects Agency Network); his idea was for global interconnection and access to programs and data at any site, from any place (Mohamed, 2009). This network became the basis of the internet.

In the 1970s, IBM launched an operating system known as VM, which permitted admins to run multiple virtual systems, or "virtual machines", on a single physical node (Mohamed, 2009). The VM operating system took "time sharing" to the next level, and most of the primary functions of today's virtualization software can be traced back to it. In the 1990s, telecommunications companies began offering virtualized private network connections (Mohamed, 2009). This allowed more users onto the same physical infrastructure through shared access, and traffic could be shifted as necessary, enabling better system balance and more control over bandwidth usage. In the interim, PC-based system virtualisation began in earnest, and as the internet became more accessible, online virtualisation logically fell into place. The term "cloud computing" came in around 1997. In 2002, Amazon created Amazon Web Services (AWS), providing a cutting-edge suite of cloud services from storage to computation (Mohamed, 2009). Amazon also introduced the Elastic Compute Cloud (EC2) as a commercial web service which allowed companies to rent computers on which to run their own applications. In 2009, Google and Microsoft joined in: the Google App Engine brought low-cost computing and storage services, and Microsoft followed suit with Windows Azure (Mohamed, 2009). Field service management software has since also made its passage to the cloud.


History of REST APIs

In understanding the history of REST APIs, the history of APIs comes first. Modern web APIs were legitimately born with Roy Fielding's dissertation, Architectural Styles and the Design of Network-based Software Architectures, in 2000 (Lane, 2012). Web APIs first appeared in the wild with the launch of Salesforce in February of that year. Salesforce delivered enterprise-class, web-based sales force automation as "Internet as a service", and XML APIs were part of Salesforce.com from day one. In November of the same year, eBay launched the eBay Application Program Interface (API) along with the eBay Developers Program (Lane, 2012). Amazon then started Amazon.com Web Services, which allowed developers to incorporate Amazon.com content and features into their websites and enabled third-party sites to search and display products from Amazon.com in an XML format. Amazon E-Commerce kicked off the modern web API movement (Lane, 2012).

Web APIs gained traction when things got social. In 2004, Flickr launched their API; Flickr was later acquired by Yahoo (Lane, 2012). The launch of its RESTful API made Flickr the imaging platform of choice for the early blogging and social media movement, since users could easily embed their Flickr photos into their blogs and social network streams. Facebook also launched Version 1.0 of the Facebook Development Platform API, which enabled developers to access Facebook friends, photos, events and profile information (Lane, 2012). Twitter followed suit and introduced the Twitter API, and Google launched their Google Maps APIs. As these APIs were creating a social buzz across the internet, Amazon recognized the potential in RESTful APIs and launched a new web service, Amazon S3 (Buyya, 2008). It delivered a simple interface that can be used to store and retrieve any amount of data, at any time, from anywhere on the internet. It offers developers access to a highly scalable, reliable, fast, and cheap data storage infrastructure, the same one Amazon uses to run its own global network of websites.

Necessity of REST APIs

REST is a set of principles that describe how web standards like HTTP and URLs are supposed to be used. Its purpose is to improve performance, scalability, simplicity, portability, visibility, modifiability, and reliability. It is a series of guidelines and architectural styles used for data transmission, and it is commonly applied to web applications. RESTful Application Programming Interfaces (APIs) are APIs that follow the REST architecture (Lozano, Galindo, & García, 2008). REST became necessary and important for minimizing the coupling between client and server components in a distributed application. When a server is going to be used by many clients that the developer has no control over, REST helps keep those clients manageable. REST is also necessary when the server needs to be updated frequently without having to update the client software. REST is in use the world over; it is part of what makes the web work so well.
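To make this concrete, here is a minimal sketch of what talking to a RESTful resource looks like from a client; the base URL and the /books resource are hypothetical, and Python's requests library is assumed to be available.

```python
import requests

BASE = "https://api.example.com"  # hypothetical REST service

# Read a resource: the URL identifies it, and GET is a safe, cacheable verb.
resp = requests.get(f"{BASE}/books/42", timeout=10)
resp.raise_for_status()
book = resp.json()

# Update the same resource: the exchange is stateless, so the request carries
# the full representation the server needs rather than relying on session state.
updated = {**book, "title": "RESTful Web APIs"}
resp = requests.put(f"{BASE}/books/42", json=updated, timeout=10)
resp.raise_for_status()
```

The point of the sketch is that the client only ever manipulates representations of resources through uniform verbs, which is what keeps client and server loosely coupled.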

Recent Advancement in REST APIs

The REST API for Atlassian applications is among the recent advancements in REST APIs: Atlassian applications expose REST APIs for developers to use and access the services of the Atlassian platform (Yates et al., 2014). These REST APIs provide an alternative to the Java APIs used by in-process plugins, and they offer better change tolerance than in-process APIs. WordPress has likewise embraced the JSON REST API as part of the platform's future (WordPress, 2011). There is a clean separation between client and server, and requests can be read and executed without being inside either the WordPress front end or the admin panel (WordPress, 2011). The integration of the JSON REST API marks the latest step in the evolution of WordPress from its humble beginnings as a blogging solution into a fully featured application platform; JSON itself is a lightweight data interchange format based on a subset of the JavaScript language. The WP API allows one to take CRUD (Create, Read, Update, and Delete) actions on many kinds of WordPress content: posts, pages, media, comments, and so on. The REST API gives any language instant access to the complete range of WordPress's native functionality, and it allows mobile developers to treat WordPress installs like any other server. WordPress also makes use of its own front end strictly optional (Katayama, Nakao, & Takagi, 2010). Additionally, batch requests let one hit multiple different endpoints of the REST API in a single HTTP request.
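As a quick illustration of that CRUD surface, the sketch below lists recent posts through the WP REST API's /wp-json/wp/v2/posts route; the site URL is a placeholder, and the requests library is assumed.

```python
import requests

site = "https://example.com"  # placeholder WordPress install

# Read the five most recent posts; the same route handles create/update/delete
# for authenticated clients, covering the CRUD actions mentioned above.
resp = requests.get(f"{site}/wp-json/wp/v2/posts", params={"per_page": 5}, timeout=10)
resp.raise_for_status()

for post in resp.json():
    print(post["id"], post["title"]["rendered"])
```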


Open source projects are furthering software practices based on RESTful APIs. SmartBear Software launched an open source project under the governance of the Linux Foundation called the Open API Initiative (OAI) to establish standards and guidance for how REST APIs are defined (Katayama et al., 2010). The major goal of the OAI is to define a standard, language-agnostic interface description for REST APIs that enables computers and users to discover and comprehend the capabilities of a service without access to source code or documentation, and without inspecting network traffic.
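To give a feel for such a language-agnostic description, here is a minimal OpenAPI-style definition sketched as a Python dictionary and printed as JSON; the API, path and fields are invented for illustration.

```python
import json

# A deliberately tiny, hypothetical OpenAPI 3.0 description of one endpoint.
spec = {
    "openapi": "3.0.0",
    "info": {"title": "Example Books API", "version": "1.0.0"},
    "paths": {
        "/books/{id}": {
            "get": {
                "summary": "Fetch a single book by id",
                "parameters": [
                    {"name": "id", "in": "path", "required": True,
                     "schema": {"type": "integer"}},
                ],
                "responses": {"200": {"description": "The requested book"}},
            }
        }
    },
}

print(json.dumps(spec, indent=2))
```

A machine can read a document like this and know, without touching the source code, that the service exposes GET /books/{id} taking an integer path parameter.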

Future of REST APIs

RESTful APIs are increasingly regarded as superior to heavyweight service-oriented architectures, and cloud computing and microservices are working to make RESTful API design the rule in the future. Daniel Bachhuber sees the REST API going even further; he says:

"I believe WordPress to be the embodiment of core philosophies, rather than a particular manifestation of software: ownership over personal data, design for users, commitment to backward compatibility, and so on. The WP REST API is the foundational component of WordPress being embedded within 100% of the web" (Schiola, 2016).

IoT developers need REST without needless bloat in either HTTP or JSON. A lean JSON is the future for IoT, and the REST model is a strong fit for it. REST holds the future because it lets organizations build infrastructure with fewer worries about being hitched long-term to a particular client-side technology; the server will always live longer than the client (Lanthaler & Gütl, 2012). Another key idea in the REST architectural philosophy is that the server supports caching and is stateless.

 

 

References

Armbrust, M., Fox, A., Griffith, R., Joseph, A. D., Katz, R., Konwinski, A., … & Zaharia, M. (2010). A view of cloud computing. Communications of the ACM, 53(4), 50-58.

Buyya, R., Yeo, C. S., & Venugopal, S. (2008, September). Market-oriented cloud computing: Vision, hype, and reality for delivering IT services as computing utilities. Proceedings of the 10th IEEE International Conference on High Performance Computing and Communications (pp. 5-13). Los Alamitos, CA: IEEE CS Press.

Katayama, T., Nakao, M., & Takagi, T. (2010). TogoWS: Integrated SOAP and REST APIs for interoperable bioinformatics Web services. Nucleic Acids Research, 38(suppl 2), W706-W711.

Lane, K. (2012). History of APIs. API Evangelist. Retrieved from: http://apievangelist.com/2012/12/20/history-of-apis/

Lanthaler, M., & Gütl, C. (2012, April). On using JSON-LD to create evolvable RESTful services. Proceedings of the Third International Workshop on RESTful Design (pp. 25-32). Rio de Janeiro, Brazil: ACM Press.

Lozano, D., Galindo, L. A., & García, L. (2008, September). WIMS 2.0: Converging IMS and Web 2.0. designing REST APIs for the exposure of session-based IMS capabilities. In The Second International Conference on Next Generation Mobile Applications, Services, and Technologies, (pp. 18-24).

Mohamed, A. (2009). A history of cloud computing. Computer Weekly, 27. Retrieved from: http://www.computerweekly.com/feature/A-history-of-cloud-computing

Schiola, E. (2016, January 20). The future of REST API: An interview with Daniel Bachhuber. Torque. Retrieved from: http://torquemag.io/2016/01/future-rest-api-interview-daniel-bachhuber/

WordPress. (2011). WordPress.org. Retrieved from: https://wordpress.org/

Yates, A., Beal, K., Keenan, S., McLaren, W., Pignatelli, M., Ritchie, G. R., … & Flicek, P. (2014). The Ensembl REST API: ensembl data for any language. Bioinformatics, 613.

A different Wikipedia for #Web #APIs

A couple of months ago I read an article by Kin Lane about efforts to create shareable API docs. Personally, I am a strong fan of creating your API definitions up front, because it makes it easier to estimate what integrating with your service will look like for your clients, and it also helps your team develop clients and tests. If you are interested in those matters, you can always have a look at a presentation I gave a couple of years ago.

This article, though, focuses on one of the efforts mentioned by Kin. It is called APIs.guru, and it is an open source effort to document existing public APIs in the OpenAPI (fka Swagger) format. Starting from there, with existing tooling like the awesome API Transformer, you can generate RAML, WADL, API Blueprint and whatever else they support or plan to. Another interesting alternative is its open-source analogue, api-spec-converter (once it supports more output formats).

 

The Project

It was not long before I starred the GitHub project and Ivan (the mastermind behind the whole thing) contacted me, and we started chatting about it.

What exactly is APIs.guru?

The overall goal of the project is to create a machine-readable Wikipedia for REST APIs, with the following principles:

  • Open source, community driven project.
  • Only publicly available APIs (free or paid).
  • Anyone can add or change an API, not only API owners.
  • All data can be accessed through a REST API (see the sketch below).
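As a small taste of that last point, the sketch below pulls the public directory listing and prints a few entries; the https://api.apis.guru/v2/list.json endpoint is quoted from memory and the requests library is assumed, so treat it as illustrative.

```python
import requests

# The APIs.guru directory is itself exposed over HTTP as machine-readable JSON.
resp = requests.get("https://api.apis.guru/v2/list.json", timeout=30)
resp.raise_for_status()
directory = resp.json()

print(f"{len(directory)} APIs in the directory")
for name in sorted(directory)[:5]:
    print(" -", name)
```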

 

Building Around APIs.guru

Listed below are some of the core integrations.

APIs.guru is also used as a test suite in the following projects:

  • swagger-parser – Swagger 2.0 parser and validator for Node and browsers
  • SwaggerProvider – F# Type Provider for Swagger
  • ReDoc – Swagger-generated API Reference Documentation

Spreading the Word

Ivan has made it clear that he welcomes any help anyone is willing to offer, be it

  • coding,
  • documentation,
  • dissemination
  • or just some feedback.

When you try to create a community project and build something around it, you should be open to talking to people and listening to what they say. Ivan has exactly the right mindset to run something like this. It did not take long before I invited him to present at the API Athens Meetup.

It was a nice event with tons and tons of API discussions, and pizza as well!

 

Join the Movement


Become an API Guru!!

The Semantic Web: Is the reality finally catching up with the hype?

It’s hard to believe that just 25 years ago the idea of linking together databases so people could access information easily was limited to research universities and prescient sci-fi writers.

With the advent of the worldwide web and HTML, databases came to the masses in the form of pages. Pages could be made up of images, text, video and even links with web pages linking to other web pages. We no longer needed a librarian to retrieve facts nor the skillset of a researcher to tap into bodies of knowledge.

But with the advent of the broader Internet, information was liberated, and for many businesses and governments that leverage this data, the volume and complexity have spiralled out of control. Almost every business is now a publisher, and good information quickly became littered with bad information.

Authoritative websites might have excellent data quality but struggle to organise information in an adequate way. All the progress of unleashing useful information morphed into a calamitous expanse of the world's information being dumped together, and the reader is left to make sense of it and find their own way through the chaos.

But what if a search query yielded a link directly to the data itself – versus the page it resided on? With Google’s Knowledge Graph, you can ask for population and the answer appears. Google, Facebook with its Open Graph Protocol and the BBC are three organisations that are using linked data to have the computer deliver the exact data to users.

In order to make the machine "smarter", though, there needed to be a way for computers to understand how to find the data on a page within the metadata. This standard is called RDF (Resource Description Framework). Just as HTML presents a way to structure and relay information to the user, RDF provides information to the computer.

But databases only understand facts. In RDF, a fact is represented as a triple. A triple consists of a subject, a predicate, and an object, where the subject and the object are both entities and the predicate describes the relationship between them. Using this simple model, information from any data source (spreadsheets, databases, XML documents, web pages, RSS feeds, e-mail…) can be represented in a uniform way. Since all information is referenced via global URIs, any data source can refer to any other. Each triple can easily be combined with other facts and sources to form a web of information that you can assemble and reassemble.
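For instance, here is a small sketch using the Python rdflib library that states one such fact as a subject-predicate-object triple built from global URIs; the example.org URIs are purely illustrative.

```python
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")

g = Graph()
# Subject: England, predicate: capital, object: London, all identified by URIs;
# a plain-text label is attached as a literal for readability.
g.add((EX.England, EX.capital, EX.London))
g.add((EX.London, EX.label, Literal("London")))

print(g.serialize(format="turtle"))
```

Because every node is a URI, another dataset can point at the very same http://example.org/London, and its facts merge with these ones automatically.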

So, if the machine knows that this picture is a map of England it links out to all the other “facts” about England – such as the capital city, its population, type of government, or the currency. Voilà. You have immediately created a logical piece of content made of facts all related to one another. And THIS is very powerful, very dynamic and very valuable.

There is a whole world of triples out there – free triple data sets that have been crowd-sourced or created by authoritative bodies. You can use this data – called public triples or Linked Open Data (LOD)– with your own private triples – if you have them.

The world of triples and linked data was coined the Semantic Web by Tim Berners-Lee about a dozen years ago, and as with any good idea, the hype was high but the delivery left a lot to be desired. The Semantic Web was going to add context and intelligence to your data, but the leap from research lab into production was hard. Ironically, semantic linked data on its own lacked organisational context.

Public databases like the Open Government Data Initiative (OGDI) made available a wide body of public data (some in RDF form) but meshing it with internal data had been near impossible, as internal data was either in a relational database or in a document-centric database and triples needed their own triple store. To find the data in each database meant it needed its own index. It still required humans – or a tremendous amount of computing power and data modelling to put all the data together.

In more recent years, there has finally been a breakthrough with a combination database that allows all types of data, documents, values and triples, along with their respective indices, to reside together. This allows triples to be combined with other data types (public data and internal data, linked or not) in rich applications that can assist in risk management, decision support, knowledge management and reference data management.

This combination database allows ontologies (a domain or organisation’s framework of concepts and relationships that includes vocabularies, taxonomies, business rules) and data types to not only co-exist – but to inter-relate.

Let's consider the amount of time spent rekeying information over and over again: a researcher wants to look up all the types of herbicides, then look at a specific chemical within a herbicide, and a molecular compound within the chemical, and now see what other compounds it could be in. You could put all this into Excel and begin cutting and pasting. Or you can download from linked open data sets, do a quick conversion from RDBMS to triple format, and quickly query to find the answer. The first way is error prone, tedious, and takes many hours if not days. The second way is a tremendous time saver, and it is accurate.

Or consider a group of researchers in a department who want to know everything about drought-resistant crops for a given location. With traditional tools they can start to type in a search and get links. They would then build their own data map of how it all relates, such as the key compounds they need to research further and how those fit together. With semantics, the researcher can access existing data maps and links instead.

This topic, drought, is related to many other topics in the literature. These relationships represent an ontology: an organisation of topics that represents a way of thinking about this domain. So instead of doing searches, the researcher can interact with these ontologies and see what content and data is available and how it fits together. The researcher can then make new associations, and even share them, adding to the semantic web.

Not a researcher? How about a developer, constantly getting requests to provide reports from disparate systems? Instead of spending weeks or even months remodelling, what if you used RDF as the data model? By using RDBMS-to-RDF tools you can relatively easily migrate relational data to triples, speeding development by orders of magnitude.
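As a tiny sketch of that idea (not of any particular RDBMS-to-RDF tool), the snippet below maps rows that could have come from a relational table into triples with rdflib; the table, columns and data are invented.

```python
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/compound/")
P = Namespace("http://example.org/property/")

# Rows as they might come out of a relational "compounds" table (invented data).
rows = [
    {"id": "c1", "name": "Glyphosate", "found_in": "herbicide"},
    {"id": "c2", "name": "Atrazine", "found_in": "herbicide"},
]

g = Graph()
for row in rows:
    subject = EX[row["id"]]                               # each row becomes a resource
    g.add((subject, P.name, Literal(row["name"])))        # each column becomes a predicate
    g.add((subject, P.foundIn, Literal(row["found_in"])))

# The graph can now be queried and linked like any other triple store content.
for name in g.objects(predicate=P.name):
    print(name)
```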

If you have tremendous volumes of information that need to be updated as new information flows in, semantic dynamic publishing can be a tremendous solution. In airing the Olympics, the BBC had a dozen editors to develop, manage and update the pages for 10,000 athletes – their countries and their sports. Instead of having humans cull through the assets to lay out the pages, assets were dynamically added as certain conditions were met. The result? Seamless dynamic delivery with no additional headcount.

With the advent of technology that allows triples to be stored with other types of assets, industry applications are quickly evolving, including

  • Financial services companies – exploring the impact of semantic triples on maintaining reference data for compliance and governance
  • Intelligence agencies – looking to map social networks with intelligence reports
  • Media – using semantic web technologies to dynamically publish sites
  • Healthcare companies – using semantics to simplify analytics around electronic patient records, prescription drugs and insurance data

With triples acting as the glue between documents and values, this webby way of linking things is putting the most contextually relevant facts at users' fingertips, the way we envisioned it. The technology has finally caught up with the aspirations.

 

APIs and Linked Data: A match made in Heaven

Due to the proliferation of available public sector data sources and initiatives, the interlinking and combination of such datasets has become a topic of major interest among SME information managers. While more agile options for data integration are being requested, conventional methods of data integration are not feasible due to the massive size of the available data. Much of that data is also unstructured, making it either inaccessible to SMEs or prohibitively expensive for them to exploit. This calls for tools that support users in the re-use of such data, hiding the underlying complexity and allowing the re-use of existing software applications.

A Quick Intro

A nice Linked Data primer can be found in this useful blog post, or you can go quickly through my following presentation.

So, here is an extra attempt to explain the Linked Data Web, and I promise that I won't use any lingo in the following:

Imagine you’re in a huge building with several storeys, each with an incredible large amount of rooms. Each room has tons of things in it. It’s utterly dark in that building, all you can do is walk down a hallway till you bang into a door or a wall. All the rooms in the buildings are somehow connected but you don’t know how. Now, I tell you that in some rooms there is a treasure hidden and you’ve got one hour to find it.
Here comes the good news: you're not left to your own devices. You have a jinn, let's call him Goog, who will help you. Goog is able to take you instantaneously to any room once you tell him a magic word. Let's imagine the treasure you're after is a chocolate bar, and you tell Goog: "I want Twox". Goog tells you that there are 3579 rooms where there is something with "Twox" in them. So you start with the first room Goog suggests, and as a good jinn he of course takes you there immediately; you don't need to walk. Now that you're in the room, you put everything you can grab into your rucksack and go back outside (remember, you can't see anything in there). Once you are outside the building again and can finally see what you've gathered, you find out that what is in your rucksack is not really what you wanted. So you have to go back into the building and try the second room. Again and again, till you eventually find the Twox you want (and you are really hungry by now, right?).
Now, imagine the same building, but all the rooms and stairs are marked with fluorescent stripes in different colours; for example, a hallway that leads to some food is marked with a green stripe. Furthermore, the things in the rooms also have fluorescent markers in different shapes. For example, Twox chocolate bars are marked with green circles. And there is another jinn now as well: say hello to LinD. You ask LinD the same thing as Goog before: "I want Twox", and LinD asks you: do you mean Twox the chocolate bar or Twox the car? Well, the chocolate bar of course, you say, and LinD tells you: I know about 23 rooms that contain Twox chocolate bars, I will get one for you in a moment.
How can LinD do this? Is LinD so much more clever than Goog?
Well, not really. LinD does not understand what a chocolate bar is, any more than Goog does. However, LinD knows how to use the fluorescent stripes and markers in the building, and can thus get you directly what you want.
You see: it's the same building and the same things in there, but with a bit of help in the form of markers we can find and gather things much more quickly and with fewer disappointments involved.
In the Linked Data Web we mark the things and hallways in the building, enabling jinns such as LinD to help you to find and use your treasures. As quick and comfortable as possible and no matter where they are.

A Workshop on Linked Data

Almost a month ago I attended a workshop on Linked Data organised by the Linda Project. Even though I already have a lot of experience with various tools in the area, I could not expect what I saw. They have created a super simple to use toolkit that you can find by simply following this link. They offer all the code as open source, so you can fork whatever repo you like!

Obviously we wouldn't even be here discussing this nice tool if it did not come with everything already packed into many nice APIs.

Other Linked Data APIs

I tend to update this section as a reference for companies that utilise Linked Data through APIs as their core business value.

OpenCorporates

OpenCorporates' mission is to aggregate every bit of company-related public data and match it to the relevant company.
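By way of illustration, a company search against their public API might look like the sketch below; the endpoint path, version and response shape are my assumptions from memory, so check the current OpenCorporates documentation (and API-key requirements) before relying on them.

```python
import requests

# Assumed endpoint and parameters; consult the official docs for the exact API.
resp = requests.get(
    "https://api.opencorporates.com/v0.4/companies/search",
    params={"q": "opencorporates"},
    timeout=30,
)
resp.raise_for_status()

for result in resp.json()["results"]["companies"][:5]:
    company = result["company"]
    print(company["name"], "-", company["jurisdiction_code"])
```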


Fin

Today's information economy calls for enterprises and SMEs that are capable of rapidly extracting valuable information from various data sources and transforming it into intelligence, in order to gain (or retain) their competitive advantage, forecast future conditions, and transform themselves into intelligence-based, information-rich entities that go beyond their traditional business practices by exploiting opportunities that surface during information retrieval and digestion.


In this context, information managers hold a significant position in today's enterprises, as they are responsible for the above-mentioned process. However, their daily activities are becoming more and more difficult and pressured: over the last decade there has been an explosion of public sector information data sources and initiatives, following inconsistent publishing formats and data logic (semantics) and creating a very fragmented field of work. The massive growth of available data has doomed conventional methods of data integration to fail, while the complexity of processes within organisations asks for more agile options to link and mash up data in a qualified way. The availability and matching of diverse data sources is becoming more crucial, and therefore the need for standards-based tools for the information management of SMEs is growing.

The Linked Data domain is promising to provide the answer to such problems, by creating the necessary infrastructure that will interlink these vast amounts of data.

 

 

 

 

Why would anyone need to evangelise about the Semantic Web?

This (as the title may reveal) is about semantic web technologies and their usage, as well as the discussion around it.

The world has lately made such comet-like advances in science that we almost hope to learn, before we die, something about our own infancy. Recent work in cognitive science explores the way humans deal with information.

The key questions are (borrowed from a nice online reference):

Perception

How do we account for selective attention to aspects of artworks? How do the perceptions of experienced viewers or listeners differ from those of less experienced ones? How does aesthetic perception build upon, or how is it related to, natural human perceptual abilities?

Imagination

What is the best explanation for imaginative experiences of the worlds created by artworks, especially those of complex narratives in films or novels? Studies of autistic children suggest that they lack certain powers to imagine the viewpoint of others. Sometimes this is put by saying that they lack a theory of the mind; but what does this mean? Some say that they lack a certain knowledge that (“theory theory”), while others argue rather that imagination involves some kind of basic, probably hard-wired, knowledge how– an ability to simulate the experiences of others (“simulation theory”). What is the best explanation of various kinds of images, such as mental images, visual images, etc.? How are perceptual and symbolic imagination distinct; how are they related?

Emotions

Do we experience authentic emotions or “pretend” (simulated) emotions in response to artworks? What is an “authentic emotion,” anyway? Can we experience genuine emotions (fear, horror, arousal, etc.) in response to illusions of, say, films? How are the emotions we feel in response to artworks “ecologically” based, i.e., how are they related to our environmental adaptation and species survival? What can neuroscience tell us about complex aesthetic emotions? What does cognitive science offer as an account of emotional expression IN works of art, such as impressionist music, German expressionist cinema, abstract expressionist painting, etc.?

Representation

Does representation occur through conventions or is it somehow more natural, a matter of the operation of certain psychological laws (perhaps even quite complex ones)? What is meant by “representation” in the context of discussions of mental processors, such as musical or visual processors? Here, representation seems to involve a specification or “cognitive mapping,” not symbolization, of a world, leading some to deny that so-called “internal representations” have any role in cognitive science.

Interpretation

What cognitive, perceptual, and other skills are used in interpreting works of art? How do basic perceptual skills enter in? Can cognitive science offer persuasive accounts of the way we learn to interpret difficult works such as avant-garde cinema, twelve-tone music, etc? What is the role of schemata that aid us in our perceptions, and how would we account for the development of such schemata (like, say, the master chess-player’s schemata employed in looking at a chess board)?

Language

Is all thought inherently linguistic or propositional in nature? Or does a modularity thesis of mind hold — are there forms of cognition unique to, say, music and visual art? Is language closely related or not to nonlinguistic communication (stop and go lights, gestures, etc.)? Is it helpful and productive to treat music, painting, film, etc. as having languages of their own? How would such languages differ from ordinary languages? What is the relation of natural language to literary language (metaphor, lyrical poetic language, etc.)? One account of linguistics that has been extremely influential for art theorizing is that of Saussure; what would be the implications of replacing his theory with, say, that of Chomsky?

Narration

What is the best way to understand narration as it operates in different art forms such as the novel and cinema? How might narrative be involved in another temporal art form such as music? Are narratives part of human conscious experience generally? What would account for such narratives without an internal homunculus to serve as their narrator? And what is the relation then between “narrative truth” and “historical truth”? Can certain pathological states be understood as disrupted or abnormal narratives; how might such narratives play a role in the creation of innovative or avant-garde art? What is the relation between narration and explanation? What makes a narrative “true”?

Knowledge and the Ineffable

Do we have forms of experience in the arts that are ineffable? That is, do we have ineffable knowledge of, say, certain kinds of nuances in music that we cannot express in words? What would make such awareness count as real “knowledge”? What implications are there for current theories of consciousness like Dennett’s?

Having identified the variety of open issues in cognitive science, let's move on to the web. Since the current web aims at much more than Sir Tim originally hoped for, we need to identify and talk about it.

Before we move forward: while I was writing an introduction to the Semantic Web, I found an already existing intro, which I quote below.

Semantic Web

The Semantic Web is a mesh of information linked up in such a way as to be easily processable by machines, on a global scale. You can think of it as being an efficient way of representing data on the World Wide Web, or as a globally linked database.

The Semantic Web was thought up by Tim Berners-Lee, inventor of the WWW, URIs, HTTP, and HTML. There is a dedicated team of people at the World Wide Web Consortium (W3C) working to improve, extend and standardize the system, and many languages, publications, tools and so on have already been developed. However, Semantic Web technologies are still very much in their infancy, and although the future of the project in general appears to be bright, there seems to be little consensus about the likely direction and characteristics of the early Semantic Web.

What's the rationale for such a system? Data that is generally hidden away in HTML files is often useful in some contexts, but not in others. The problem with the majority of data on the Web in this form at the moment is that it is difficult to use on a large scale, because there is no global system for publishing data in such a way that it can be easily processed by anyone. For example, just think of information about local sports events, weather information, plane times, Major League Baseball statistics, and television guides… all of this information is presented by numerous sites, but all in HTML. The problem is that, in some contexts, it is difficult to use this data in the ways that one might want to.

So the Semantic Web can be seen as a huge engineering solution… but it is more than that. We will find that as it becomes easier to publish data in a repurposable form, more people will want to publish data, and there will be a knock-on or domino effect. We may find that a large number of Semantic Web applications can be used for a variety of different tasks, increasing the modularity of applications on the Web. But enough subjective reasoning… on to how this will be accomplished.

The Semantic Web is generally built on syntaxes which use URIs to represent data, usually in triple-based structures: i.e. many triples of URI data that can be held in databases, or interchanged on the World Wide Web using a set of particular syntaxes developed especially for the task. These syntaxes are called "Resource Description Framework" syntaxes.
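As a small sketch of that interchange, again with the Python rdflib library, a couple of made-up facts are parsed from one RDF syntax (Turtle) and re-serialised in another (RDF/XML):

```python
from rdflib import Graph

# A couple of made-up facts written in Turtle, one of the RDF syntaxes.
turtle_data = """
@prefix ex: <http://example.org/> .
ex:London ex:isCapitalOf ex:England .
ex:England ex:population "56000000" .
"""

g = Graph()
g.parse(data=turtle_data, format="turtle")

# The same graph can be exchanged in a different syntax, e.g. RDF/XML.
print(g.serialize(format="xml"))
```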


This is in general a huge area that I will try to explore, along with all its latest advancements. Related areas of interest for this blog will be the Internet of Things, web services, APIs, and anything similar you would like to hear about.