Web APIs Explained By Selling Goods From Your Farm

CodeAnalogies Blog

If you have been to a farmer’s market or farm stand, then you can understand the concept of an application programming interface (API).

If you are new to web development, you probably hear the term “API” a lot.

“I can’t wait until that company releases their public API!”

“That company’s API is a confusing mess.”

“Do they have an endpoint for that data in their API?”

Understanding the concept of an application programming interface (API) can be pretty difficult if you are not familiar with concepts like SOAP, HTTP and XML.

So, I wanted to find a way to explain how web APIs work as a whole, so that when you get into the nitty-gritty technical details, you will understand how it all fits together.

In this tutorial, you are the owner of a farm that sells five products: chicken, pork, eggs, tomatoes and corn.


In order to understand this…

View original post 1,873 more words


#APIs in the Travel Industry

What is an API?

This is one of the issues we have been dealing with since the first day of this blog. I have been uploading articles and opinions on it, as well as tutorials. Dealing with this question again doesn’t feel like repeating myself; rather, I want to revisit it with a fresh approach.

Most of the web services in use today have an API. As common as this word has become in information technology, many people without a technical background have little to no idea what an API actually is. API stands for Application Programming Interface, and it is a set of standards, routines and protocols for accessing a web-based software application. APIs govern the communication between two applications. Explained simply, an API can be thought of as a door or window that lets data messages pass in and out of a web-based service or application. The rules of the API control the flow of information, so the communication is completely governed and controlled. The API door only opens for software or applications that hold the key to its lock. One thing that must be clarified here is that an API does not interact with the user in any way; it is purely an application-to-application interaction tool. The user is neither aware of this communication nor able to intervene in it.
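To make the door-and-key picture concrete, here is a minimal sketch in Python of an application presenting an API key to a service. The host, path and header name are placeholders invented for this example; every real service documents its own.

```python
import http.client
import json

API_KEY = "my-secret-key"  # the "key" issued by the service to an authorized application

# Hypothetical service; real APIs publish their own host, paths and auth scheme.
conn = http.client.HTTPSConnection("api.example.com")
conn.request(
    "GET",
    "/v1/weather?city=Lisbon",
    headers={"X-Api-Key": API_KEY, "Accept": "application/json"},
)
response = conn.getresponse()

if response.status == 200:
    data = json.loads(response.read())  # structured data meant for the application, not the user
    print(data)
else:
    # Without a valid key the "door" stays shut, typically with a 401 or 403 status.
    print("Request refused:", response.status)
conn.close()
```

The user of the application never sees this exchange; only the two programs talk to each other.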

APIs in the Online Travel Industry:

The online travel industry is a giant and complex web of channels, networks and applications working together simultaneously. The communication between these channels and applications is of paramount importance, and the entire online travel industry depends on it. The main reason a travel company develops an API is so that its application can interact with other travel companies and suppliers, such as hotels and airlines. Both applications can then send and receive dynamic information and build a system of communication between them. The information usually transferred via these APIs is the price and availability of travel-related products, such as airline tickets and hotel rooms. Suppliers such as hotels and airlines also use APIs to make sure their products are seen by a large audience.
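As a rough illustration of that kind of exchange, the sketch below asks a made-up supplier endpoint for room availability and reads the prices out of a JSON response. The host, parameters and response shape are invented for the example and do not belong to any real vendor’s API.

```python
import http.client
import json
import urllib.parse

# Hypothetical availability query; real suppliers define their own endpoints and fields.
params = urllib.parse.urlencode({
    "destination": "LIS",
    "check_in": "2025-07-01",
    "check_out": "2025-07-05",
    "guests": 2,
})

conn = http.client.HTTPSConnection("supplier.example.com")
conn.request("GET", f"/v1/hotel-availability?{params}",
             headers={"Accept": "application/json"})
offers = json.loads(conn.getresponse().read())
conn.close()

# The agency application can now merge these offers with results from other suppliers.
for offer in offers.get("offers", []):
    print(offer["hotel_name"], offer["room_type"], offer["price_per_night"])
```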

The importance of APIs in the online travel industry can be seen in their widespread usage. The whole industry is based on a huge and complex system of APIs that connect different applications and websites and make sure that correct information is always available to users. Different entities in the travel industry, such as distribution systems, transaction hubs, reservation systems and merchandising platforms, are all connected with each other in a systematic manner. The main aim of this interconnectivity is to keep the link between supply and demand in its most effective and efficient form. The system is complex mainly because users on travel websites always expect fast and accurate results for availability and price. Most travel websites now also support additional services for their users, such as car rentals, parking spots, local event bookings and tours. This adds to the burden on the system, and new APIs have to be defined to keep the application or system up to date with these additional factors as well.

If we look at the online travel industry as a whole, a lot of information is constantly being sent and received between various software systems and websites. Most of this information travels in API calls, whether the response is in XML, JSON or YAML. APIs make sure this information is transferred correctly and efficiently, so that the system runs properly and no wrong information is conveyed to users. The following are some of the major travel-industry APIs.



TripAdvisor:

TripAdvisor proudly calls itself the “world’s largest travel site”. It has more than 200 million customer reviews and opinions about travel-related products such as restaurants, hotels and airlines. The company’s website is the main thing that has attracted so many customers to it, and it also gives developers information about the APIs the company offers. Developers eager to build their own travel-related applications can use the TripAdvisor Content API for this purpose. One of the best things about the API is that it even provides information about the destination of the data being transferred to it. TripAdvisor also offers other APIs that can be used for B2B connectivity. The company is constantly at work to further improve its system and make sure its customers are always satisfied when they leave the website.

XML Travelgate:

XML Travelgate is a company that performs XML integrations for various travel-related purposes. Its main aim is to provide API services to different travel-related programs based on three basic principles: cost saving, a high service level and an extensive product catalog. The company provides its customers with market-leading technology to help them succeed in the online travel industry, and the experts working at XML Travelgate make sure that the clients they work for can focus more on the actual business than on the technology used to run it. With dozens of highly satisfied clients from all over the world, it is one of the best companies doing XML integrations in the online travel industry.


Sabre:

Sabre is another big name in the online travel and tourism industry. With clients ranging from airlines to car rental companies and from hotels to travel agencies, Sabre can provide just the right kind of travel solution for your company. Sabre Dev Studio gives developers a well-organized platform on which they can easily design and develop travel-related websites and web applications. Sabre also provides a number of APIs to its clients, which can be used to feature its various travel platforms in new and existing applications or websites.

Concluding Thoughts

We are reaching the end of our journey through this review of the travel API industry. It remains in this section to summarize some of the key takeaways from the article in order to reinforce the main concepts. In this article, I did not even begin to scratch the surface of the industry, and you can definitely find more in the awesome stack network by Kin Lane, who needs no introduction. The idea behind such articles is that we can discuss our thoughts and our incentives in the business even though we may not be experts in that particular domain.

It is obvious that the travel industry has greatly benefited from the new technology of sharing data through APIs. It has scaled the business to a whole new level of experience for end users. It doesn’t matter whether you are Booking.com or just a small house rented out on Airbnb: you are still part of the ecosystem, and APIs are where your clients will look before they eventually reach out to you.

Web API Design Part Three: Core Concepts

A nice and simple intro to the basic concepts of REST as they were originally introduced a decade ago. I truly like this approach since it doesn’t go into any implementation details.

How To Train Your Java

Episode 88

Two months ago, we started with the motivations behind web APIs and looked at their design from a UX point of view. The important conclusion was that an API and its ecosystem are to developers what a GUI is to regular web application users. A month ago, we looked from a scientific point of view at the properties of REST, the architectural style of modern web systems, through the lens of Roy Fielding’s famous Ph.D. dissertation.


Having those foundations, today we are going to get our hands dirty and talk about how to actually get the work done: resources and representations, naming, relations, HTTP methods, collections, functions and sanity checks.


A REST web API is built around exposing representations of the resources that are part of our system. The distinction is important: a resource is some piece of data stored in our system or accessed ad hoc from somewhere else. It…

View original post 1,380 more words

Uniform Interfaces in a Nutshell

The uniform interface constraint defines the interface between clients and servers. It simplifies and decouples the architecture, which enables each part to evolve independently. The four guiding principles of the uniform interface are:


Identification of Resources

Individual resources are identified in requests using URIs as resource identifiers. The resources themselves are conceptually separate from the representations that are returned to the client. For example, the server does not send its database, but rather some HTML, XML or JSON that represents some database records expressed, for instance, in Finnish and encoded in UTF-8, depending on the details of the request and the server implementation.
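A small sketch of that separation, assuming a hypothetical /customers/7 resource: the same URI is requested twice, and the Accept header decides whether a JSON or an XML representation comes back. The host and path are placeholders.

```python
import http.client

def fetch_customer(media_type: str) -> bytes:
    """Request the same resource while asking for a particular representation of it."""
    conn = http.client.HTTPSConnection("api.example.com")  # placeholder host
    # /customers/7 identifies the resource; the Accept header asks for a representation.
    conn.request("GET", "/customers/7", headers={"Accept": media_type})
    body = conn.getresponse().read()
    conn.close()
    return body

json_representation = fetch_customer("application/json")
xml_representation = fetch_customer("application/xml")
```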

Manipulation of Resources Through Representations

When a client holds a representation of a resource, including any metadata attached, it has enough information to modify or delete the resource on the server, provided it has permission to do so.

Self-descriptive Messages

Each message includes enough information to describe how to process the message. For example, which parser to invoke may be specified by an Internet media type (previously known as a MIME type). Responses also explicitly indicate their cacheability.

Hypermedia as the Engine of Application State (HATEOAS)

Clients deliver state via body contents, query-string parameters, request headers and the requested URI (the resource name). Services deliver state to clients via body content, response codes, and response headers. This is technically referred to as hypermedia (or hyperlinks within hypertext).

Aside from the description above, HATEOAS also means that, where necessary, links are contained in the returned body (or headers) to supply the URI for retrieval of the object itself or related objects. We’ll talk about this in more detail later.
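As a minimal sketch of what such a hypermedia-style response body might look like, here is a hypothetical order resource whose links tell the client what it can do next; the fields, link relations and URIs are purely illustrative.

```python
import json

# Hypothetical representation of an order, including hypermedia links.
order_representation = {
    "id": 17,
    "status": "processing",
    "total": 42.00,
    # The client follows these links instead of hard-coding URIs.
    "links": [
        {"rel": "self",    "href": "/orders/17"},
        {"rel": "cancel",  "href": "/orders/17/cancellation"},
        {"rel": "payment", "href": "/orders/17/payment"},
    ],
}

print(json.dumps(order_representation, indent=2))
```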

The uniform interface that any REST service must provide is fundamental to its design.


The dependency hell in software development



Most software projects depend on other software packages to provide their functionality. In fact, the bigger your software project, the more likely it is that you will end up with a large number of dependencies, either internally between different parts of your code or externally on third-party libraries.

The goal of adding a new dependency to your code is to avoid reinventing the wheel. If there is code already available that does what you want, you would normally prefer to re-use it (there are exceptions, though) rather than invest your time in writing new code to solve the same task. On the other hand, once your software project has matured over time and is ready for production, it may end up being a dependency for other software projects as well.

Grouping off-the-shelf functions into software packages and defining dependencies between them has been traditionally at the core of software development. It…

View original post 1,949 more words

Learning ~ HTTP/2 World

RT’s Knowledge World

What is HTTP?

  • Hyper Text Transfer Protocol
  • Protocol for transferring data from a web server to a web browser, in many forms/formats, so the two can communicate with each other over the Internet.
  • Standard for transferring documents on the World Wide Web (RFC 2616)

Different methods are used to transfer data from the client to the server.

An idempotent HTTP method is an HTTP method that can be called many times without different outcomes. It would not matter if the method is called only once or ten times over; the result should be the same. It essentially means that the result of a successfully performed request is independent of the number of times it is executed. For example, in arithmetic, adding zero to a number is an idempotent operation.
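A short sketch of the difference in practice, using a placeholder host and paths: repeating the PUT leaves the server in the same final state, while repeating the POST would normally create a new resource each time.

```python
import http.client
import json

conn = http.client.HTTPConnection("api.example.com")  # placeholder host
payload = json.dumps({"name": "Double Room", "price": 80})
headers = {"Content-Type": "application/json"}

# PUT stores a representation at a known URI; calling it twice ends in the same state (idempotent).
for _ in range(2):
    conn.request("PUT", "/rooms/42", body=payload, headers=headers)
    conn.getresponse().read()

# POST appends to a collection; calling it again typically creates another resource (not idempotent).
conn.request("POST", "/rooms", body=payload, headers=headers)
conn.getresponse().read()
conn.close()
```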

Below is an example of the GET method.
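As a minimal sketch using Python’s standard library (the host is a placeholder), a GET exchange looks like this:

```python
import http.client

conn = http.client.HTTPConnection("example.com")  # placeholder host
conn.request("GET", "/index.html", headers={"Accept": "text/html"})
# On the wire this sends roughly:
#   GET /index.html HTTP/1.1
#   Host: example.com
#   Accept: text/html
response = conn.getresponse()
print(response.status, response.reason)    # e.g. 200 OK
print(response.getheader("Content-Type"))  # e.g. text/html; charset=UTF-8
body = response.read()                     # the requested document itself
conn.close()
```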

Below is the complete HTTP request and response flow.

Click to get the complete HTTP reference document – Headers/Messages/Security/Status

We can transfer almost all…

View original post 421 more words

Is Serverless for you?

Vijay's thoughts on all things big and small

One of the more recent architecture choices we can play with is the idea of serverless, aka FaaS (Function as a Service). Thankfully, it is not hyped the way, say, Machine Learning is. But it is nevertheless widely misunderstood, often leading to bad design choices. I am just going to list a few questions I have been asked often (or have asked fellow techies) and give my point of view on them. I will try to keep this at a level where it makes sense for people who are not full-time technologists.

Are there really no servers?

To begin with, the name serverless itself is quite misleading. Your code does not execute in vapor – it still needs servers. From a developer’s point of view, you choose a provider to handle a lot of the things servers do (but not everything), and you can focus…

View original post 971 more words

Advantages & Disadvantages of Django

I don’t agree with all the points of the analysis, but it is a really cool blog post. It gives a nice overview for anyone interested in developing their next website or API in Django. Nice work @mohamed!

Cogzidel Technologies

Are you a developer? Then you will naturally search for the right programming language to code in, but identifying the best language and tools is a daunting task. Have you heard about Django? Do you know the advantages and disadvantages of using it? Once you do, everything seems to fall into place efficiently and quickly. Django started its journey as a framework for the Python language and, with the right functionality, greatly reduced the complexity of building a web app, giving it a more simplified model.

Python and Its Familiarity:
It is well known to all of us that Python is one of the most top-notch programming languages thanks to its ease of learning, flexibility and design. These are the reasons it is one of the most popular choices.

  • Fast to write
  • Easy to…

View original post 999 more words

Tech Talk: How TBA Scales to Handle Competition Season Load

The Blue Alliance Blog

410K+ web page views. 350K+ API requests received*. 110K+ notifications sent. That’s how much load the TBA servers experienced on a typical Saturday during the 2017 competition season. Here’s a look at how The Blue Alliance is able to scale to meet demand while keeping running costs low. In short: TBA uses Google’s scalable web platform and a whole lot of caching.

* The number of API requests is higher than this, but due to caching, our servers only see and track a fraction of the requests made. More on this later.

2016 vs. 2017 page views

Google App Engine

The main backend for The Blue Alliance runs on Google App Engine (GAE), a fully managed, highly scalable cloud platform. This gives developers the freedom to spend more time implementing features rather than managing servers and other infrastructure — very beneficial for a community-driven project…

View original post 1,030 more words

The Versions of the #Web

From the birth of the commercial Internet to what it is today, it has not been that long a journey. Evolution was hampered and slow in the beginning, but today change is happening rapidly. The future we were once discussing is no longer a concept but close to being a practical reality. Let’s take a look at the journey of the web, the advancements in the technologies that enable it, and the evolution of the web itself into what it is today and what it will be in the future.

The Tech Side of Things

In this section, I chose to discuss changes and improvements in HTTP and HTML over the years and how these changes affected the Internet we use today.


HTTP

One of the most widely adopted application protocols on the Internet, HTTP was designed in the early 1990s. The first version, unofficially labeled 0.9, was a very simple prototype built by Tim Berners-Lee. The telnet-friendly protocol consisted of a single GET method line with the path of the document and no headers or metadata.

HTTP 1.0

With the emergence and quick growth of consumer-oriented public internet infrastructure came HTTP 1.0. Some of the key protocol changes from the prototype version were:

  • The request may consist of multiple newline-separated header fields.
  • The response object is prefixed with a response status line.
  • The response object has its own set of newline-separated header fields.
  • The response object is not limited to hypertext.
  • The connection between server and client is closed after every request.

With HTTP 1.0, the response object was no longer limited to hypertext; it could be of any type. However, the “hypertext” part of the protocol’s name stayed. Almost every web server today can and will still function in HTTP 1.0.
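To see those changes side by side, here is a rough sketch of the raw bytes a client of each version would write to the socket; the host and paths are placeholders.

```python
import socket

# HTTP/0.9: a single request line, no version, no headers; the response is the bare document.
request_09 = b"GET /index.html\r\n"

# HTTP/1.0: a request line with the protocol version, optional newline-separated headers,
# and a blank line; the response starts with a status line and its own headers.
request_10 = (
    b"GET /index.html HTTP/1.0\r\n"
    b"User-Agent: sketch-client/0.1\r\n"
    b"Accept: text/html\r\n"
    b"\r\n"
)

with socket.create_connection(("example.com", 80)) as sock:  # placeholder host
    sock.sendall(request_10)
    print(sock.recv(4096).decode(errors="replace"))  # status line, headers, then the body
```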

HTTP 1.1

The first official HTTP 1.1 standard was defined in 1997. It resolved a lot of protocol ambiguities found in earlier versions of the application protocol and included performance-critical optimizations such as keep-alive connections and transfer encodings. It allowed an existing TCP connection to be reused for multiple requests to the same host, delivering a much faster end-user experience. To terminate a persistent connection, the client had to send an explicit close token to the server via the Connection header.
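A small sketch of both behaviours with a placeholder host: the first two requests reuse one TCP connection (the HTTP/1.1 default), and the last one carries the explicit close token.

```python
import http.client

conn = http.client.HTTPConnection("example.com")  # placeholder host

# Keep-alive is the default in HTTP/1.1, so these requests travel over the same connection.
for path in ("/index.html", "/about.html"):
    conn.request("GET", path)
    conn.getresponse().read()

# The explicit close token tells the server this is the last request on this connection.
conn.request("GET", "/bye.html", headers={"Connection": "close"})
conn.getresponse().read()
conn.close()
```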

HTTP 1.2

The first major update since 1999 came with version 1.2. It contained stronger and improved support for hierarchies and also provided better support for text menu interfaces. The menu interfaces helped HTTP be better suited to mobile clients. Systems supporting HTTP 1.2 consist of hierarchical, hyperlinkable menus, whose choices and titles are controlled by the administrator of the server.

HTTP 2.0

With the rise in devices and Internet use, HTTP 1.1 began to hamper performance, and demand grew for an update that could decrease latency and keep up with increasing needs. HTTP 2.0 therefore arrived in 2015 and was standardized and supported by most major browsers by the end of the year. It made no changes to how existing applications work but provided new features that can be taken advantage of for better speed. It offers significant performance and speed improvements.


HTML

Hypertext Markup Language (HTML) is the markup language that enables the creation of web pages and web applications. Along with CSS and JavaScript, it is the foundation of the World Wide Web. The first two versions of the language were very limiting; even so, HTML 2.0 remained the standard for website design until January 1997.

HTML 3.0

More people started to get into HTML; it was gaining popularity and people were demanding new features. Around this time, Netscape, the leading browser on the market, introduced proprietary tags and attributes into its browser to appease the cries of HTML authors. Being proprietary meant that a page using these tags looked bad in other browsers.

HTML 3.0 was therefore developed with far greater capabilities and features. However, it failed because browsers were slow to incorporate all of its features and ended up abandoning most of them. In 1994, the W3C began standardizing the language to steer its development in the right direction. The first standardized version was toned down to contain fewer features, making its adoption easier. It came to be known as version 3.2 and is supported by almost all browsers today.

HTML 4.0

HTML 4.0 was designed to include the features that had been dropped in the move from the failed 3.0 to version 3.2. It added support for HTML’s new presentational companion language, CSS. HTML 4.0 became the official standard in 1998 and was quickly incorporated by Microsoft into its latest browser. After revisions and corrections to the documentation, the final version came to be known as 4.01.


HTML 5

HTML 5 is the current and latest version of HTML. It contains new elements, attributes and behaviors, as well as a larger set of technologies that allow the building of diverse and powerful websites and web applications that are also mobile-friendly. Some of the technologies HTML 5 offers include:

  • Semantics: allowing you to describe more precisely what your content is.
  • Connectivity: allowing you to communicate with the server in new and innovative ways.
  • Offline and storage: allowing web pages to store data on the client-side locally and operate offline more efficiently.
  • Multimedia: making video and audio first-class citizens in the Open Web.
  • 2D/3D graphics and effects: allowing a much more diverse range of presentation options.
  • Performance and integration: providing greater speed optimization and better usage of computer hardware.
  • Device access: allowing for the usage of various input and output devices.
  • Styling: letting authors write more sophisticated themes.

The Less Tech Side of Things

On the less technical side of things, and looking at the evolution from the user’s end, the Internet has done more than evolve to show images and load web pages faster. It is no longer what it was ten years ago, and it won’t be what it is today ten, or even five, years from now.

Web 1.0

Tim Berners-Lee describes Web 1.0, the Internet before 1999, as the read-only version of the Internet. It was the version of the web that consisted entirely of web pages connected to each other through hyperlinks, a time when there were only a lot of static dotcom websites that did not provide any interactive content. Web 1.0 was very different from the Internet we are used to today. The technology was still developing back then; the Internet was in its first stage. There were millions of websites, but no active communication or information flow from the information reader back to the information producer.

Web 2.0

Web 1.0 was lacking in user interaction, and this led to the development of Web 2.0. It can be called the read-write era of the web, as it enabled information flow from the user’s end as well. Web 2.0 emphasizes user-generated content, usability even for non-experts, and interoperability, meaning that websites work equally well across multiple devices and platforms. It is also called the social web, as it empowered the common user with blogs, social media and video streaming. Any user can not only interact with content but also generate their own. Thus, with Web 2.0, users are more involved with the information that is available to them. Popular and widespread examples of Web 2.0 are Facebook, Twitter and YouTube.

Web 3.0

Web 3.0 is the newest version of the web, and one you might not be fully aware of, since the change is not as noticeable as the shift from version 1.0 to version 2.0. Web 3.0, also known as the semantic web, combines semantic markup and web services to make content readable by machines. It provides context to information and develops interactions between machines and databases. A machine can move from one database to the next, as the databases share information on a given topic rather than merely being linked. It is still in development and improving every day. Web 3.0 learns our habits and preferences to provide only the most relevant and useful information. It also involves the emergence of a 3D, virtual and inter-spatial internet, with wearable devices used to access places virtually through the Internet, and much more.

Web 4.0

Although Web 4.0 isn’t entirely here yet, it is no longer just a concept either. It will be the open, fully linked and intelligent web, driven by the information collected through all the connected devices we use. As a result, content will be more personalized and relevant than ever. An important part of Web 4.0 is the Internet of Things. With your car, air conditioner, watch, mobile phone, work and home computers, and even your refrigerator connected and sharing information, the web will be more informed and more connected than ever for each individual. It will be like an always-on version of the Internet, tapped into our lives, learning and responding, constantly adding value to even the smallest of our tasks with relevant and useful information and services.