gRPC Practical Tutorial – Magic That Generates Code

This is a really nice article I wanted to share that helped me understand the gRPC protocol a little bit deeper. It also offers a nice hands-on experience for the active developer!

What is gRPC?

gRPC is a language-neutral, platform-neutral framework that allows you to write applications where independent services can work with each other as if they were native. If you write a REST API in, say, Python, you can have Java clients work with it as if it were a local package (with local objects).

This figure should clarify it even further.

As in many RPC systems, gRPC is based around the idea of defining a service, specifying the methods that can be called remotely with their parameters and return types. On the server side, the server implements this interface and runs a gRPC server to handle client calls. On the client side, the client has a stub that provides exactly the same methods as the server.

Source: http://www.grpc.io/docs/

Why is it cool?

  • gRPC is supported in tons of languages. That means you write your service in, say, Python, and get FREE!!! native support in 10 languages:
    • C++
    • Java
    • Go
    • Python
    • Ruby
    • Node.js
    • Android Java
    • C#
    • Objective-C
    • PHP
  • gRPC is based on the brand new shiny HTTP/2 standard, which offers a bunch of cool stuff over HTTP/1.1. My favorite HTTP/2 feature is bidirectional streaming.
  • gRPC uses Protocol Buffers (protobufs) to define the service and messages. Protobuf is a thing for serializing structured data that Google made for the impatient world (meaning it’s fast and efficient).
  • As I mentioned, gRPC allows bidirectional streaming out of the box. No more long polling or blocking HTTP calls. This is valuable for a lot of services (any real-time service, for example).

Why it could fail

  • gRPC is Alpha Software™. That means it comes with no guarantees. It can (and will) break, docs are not yet comprehensive, support could be lacking. Expect tears and blood if you use it in production.
  • It has no browser support, yet. (Pure JS implementation of protobufs is in alpha, so this point will likely be moot in a few months).
  • No word from other browser vendors on standardization (which is why Dart didn’t catch on).

What are we making?

I thought really hard about what I should make: something simple enough for most people to follow, but also practical enough that you can actually use it in your projects.

I use Twitter a lot, and have worked on a lot of projects using the Twitter API. Almost every project requires parsing tweet text to extract tagged users, hashtags, URLs, etc. I always use the twitter-text-python library for this. I thought it would be great to write a server wrapping this package in Python, and then generate stubs (native client libraries) in Python and Ruby.

All code is here: https://github.com/karan/grpc-practical-tutorial

What you need

  • protoc – install
  • grpc-python – pip install grpcio
  • grpc-ruby – gem install grpc

Installation of these should be easy.

parser.proto

The proto file is where we define our service and the messages that compose it. For this particular project, this is the proto file we are using.

I have commented the file, so it should be pretty straightforward.

// We're using proto3 syntax
syntax = "proto3";

package twittertext;

// This is the service for our API
service TwitterText {
  // This is where we define the methods in this service

  // We have a method called `Parse` which takes
  // parameter called `TweetRequest` and returns
  // the message `ParsedResponse`
  rpc Parse(TweetRequest) returns (ParsedResponse) {}
}

// The request message has the tweet text to be parsed
message TweetRequest {
  // The field `text` is of type `string`
  string text = 1;
}

// The response message contains the entities parsed from the tweet
message ParsedResponse {
  // `repeated` is used for a list
  repeated string users = 1;
  repeated string tags = 2;
  repeated string urls = 3;
}

The full proto3 syntax guide can be found here.

Generate gRPC code

Now comes the fun part. We are going to use gRPC to generate libraries for Python and Ruby.

# Python client
protoc -I protos/ --python_out=. --grpc_out=. --plugin=protoc-gen-grpc=`which grpc_python_plugin` protos/parser.proto

# Ruby
protoc -I protos/ --ruby_out=lib --grpc_out=lib --plugin=protoc-gen-grpc=`which grpc_ruby_plugin` protos/parser.proto

What has happened is that based on the proto file we defined earlier, gRPC has made native libraries for us.

The first command will generate parser_pb2.py. The second will generate lib/parser.rb and lib/parser_services.rb. All three files are small and easy to understand.

A Python client can now just import parser_pb2 and start using the service as if it were a native package. Same for Ruby.
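
The generated messages really do behave like ordinary Python objects, which is worth seeing before writing the full client. Here is a tiny sketch (mine, not from the repo; the tweet text is made up):

# The generated module gives us native classes for the messages
# defined in parser.proto.
import parser_pb2

req = parser_pb2.TweetRequest(text='Hello @alice! #grpc')
print req.text                    # fields behave like normal attributes
data = req.SerializeToString()    # protobuf wire format, ready to send
copy = parser_pb2.TweetRequest.FromString(data)  # and it round-trips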

Make server.py

I decided to make my server in Python, but I could have used Ruby as well.

import time

from ttp import ttp

# Bring in the generated package for our service
import parser_pb2

_ONE_DAY_IN_SECONDS = 60 * 60 * 24

# This is the parser from the third-party package,
# NOT from gRPC
p = ttp.Parser()

class Parser(parser_pb2.BetaTwitterTextServicer):
    def Parse(self, request, context):
        print 'Received message: %s' % request
        result = p.parse(request.text)
        return parser_pb2.ParsedResponse(users=result.users,
                                         tags=result.tags,
                                         urls=result.urls)

def serve():
    server = parser_pb2.beta_create_TwitterText_server(Parser())
    server.add_insecure_port('[::]:50051')
    server.start()
    try:
        while True:
            time.sleep(_ONE_DAY_IN_SECONDS)
    except KeyboardInterrupt:
        server.stop(0)

if __name__ == '__main__':
    serve()

At this point, it is helpful to have parser_pb2.py open. In class Parser we subclass the generated interface parser_pb2.BetaTwitterTextServicer and implement the Parse method.

In Parse, we receive the request (which is a TweetRequest object), parse it using the third-party package, and respond with a parser_pb2.ParsedResponse object (structure defined in the proto file).

In serve(), we create our server, bind it to a port and start it. Simple. 🙂

To start the server, simply run python server.py.

Write the clients

client.py

from grpc.beta import implementations

import parser_pb2

_TIMEOUT_SECONDS = 10

text = ("@burnettedmond, you now support #IvoWertzel's tweet "
        "parser! https://github.com/edburnett/")

def run():
    channel = implementations.insecure_channel('localhost', 50051)
    stub = parser_pb2.beta_create_TwitterText_stub(channel)
    response = stub.Parse(parser_pb2.TweetRequest(text=text), _TIMEOUT_SECONDS)
    print 'Parser client received: %s' % response
    print 'response.users=%s' % response.users
    print 'response.tags=%s' % response.tags
    print 'response.urls=%s' % response.urls

if __name__ == '__main__':
    run()

The generated code also contains a helpful method for creating a client stub. We point it at the same port the server is listening on, and call our Parse method. Notice how we build the request object (parser_pb2.TweetRequest(text=text)): it must match the message defined in the proto file.

You can run this client using python client.py and see this output:

response.users=[u'burnettedmond']
response.tags=[u'IvoWertzel']
response.urls=[u'https://github.com/edburnett/']

client.rb

this_dir = File.expand_path(File.dirname(__FILE__))
lib_dir = File.join(this_dir, 'lib')
$LOAD_PATH.unshift(lib_dir) unless $LOAD_PATH.include?(lib_dir)

require 'grpc'
require 'parser_services'

def main
  stub = Twittertext::TwitterText::Stub.new('localhost:50051', :this_channel_is_insecure)
  response = stub.parse(Twittertext::TweetRequest.new(text: "@burnettedmond, you now support #IvoWertzel's tweet parser! https://github.com/edburnett/"))
  puts "#{response.inspect}"
  puts "response.users=#{response.users}"
  puts "response.tags=#{response.tags}"
  puts "response.urls=#{response.urls}"
end

main

Similarly, we build the client for Ruby, construct the Twittertext::TwitterText::Stub, pass in a Twittertext::TweetRequest, and receive a Twittertext::ParsedResponse back.

To run this client, use ruby client.rb. You should expect the following output:

<Twittertext::ParsedResponse: users: ["burnettedmond"], tags: ["IvoWertzel"], urls: ["https://github.com/edburnett/"]>
response.users=["burnettedmond"]
response.tags=["IvoWertzel"]
response.urls=["https://github.com/edburnett/"]

Conclusion

Again, the full code is at https://github.com/karan/grpc-practical-tutorial.

You can keep building clients the same way for 10+ languages. Write once, use everywhere (almost). We haven’t even touched the sweet parts of gRPC, especially streaming, but if you look at this guide, they cover it well. I myself am just beginning to explore gRPC, but so far it seems promising. I can’t wait to see what you make with it.


#BigData innovation through #CloudComputing:

Overview:

With the digitalization of almost everything in this world, the amount of data is increasing at an exponential rate. IT experts soon realized that analyzing this data is not possible with traditional data analysis tools. Considering this ever-expanding volume of useful data that could be used in a number of ways, they came up with many solutions, among which two initiatives stand out: big data and cloud computing.

Big data analysis offers the promise of providing valuable insights from data that can create competitive advantage, spark new innovations, and drive increased revenues. By carefully analyzing the data, we can predict different things about a company. Cloud computing acts as a delivery model for a company's IT services and has the potential to enhance business agility and productivity while enabling greater efficiencies and reducing costs significantly. By storing data on cloud servers instead of in an on-site IT department, you not only save money but also help keep your data safe and secure, as the security of these cloud servers is usually in the hands of top IT security companies.

Both technologies continue to thrive. Organizations are now moving beyond questions of what and how to store big data to addressing how to derive meaningful analytics that respond to real business needs. As cloud computing continues to mature, a growing number of enterprises are building efficient cloud environments, and cloud providers continue to expand their service offerings.

Characteristics and Categories:

Databases for big data:

One of the most important and crucial tasks for any company is to choose the correct database for its big data. As data volumes grow, more and more providers have emerged to offer databases for it. The databases designed to handle big data are usually referred to as NoSQL systems, because unlike traditional data systems they do not depend on SQL. The working principle behind all these providers is, however, the same: give companies efficient, effective storage and ways to extract useful information from their big data. They truly help companies build and expand their business by giving them useful data analytics. Well-known examples among hundreds of others are Cassandra, DynamoDB, and the wider AWS family. These offerings not only give you solid data storage options, they also make sure your data is safe and secure, and provide useful analytics about it.

Machine Learning in the Cloud:

One of the most interesting features of cloud computing and big data analysis is machine learning and its integration with AI. Machine learning cloud services make it easier to build sophisticated, large-scale models that can really increase efficiency and enhance the overall management of your company's data. By injecting AI into your business, you can learn truly amazing things from your data analytics.

IoT platforms:

The Internet of Things (IoT) is also an interesting aspect of big data and cloud computing. Big data and IoT are essentially two sides of the same coin: big data is about the data itself, whereas IoT is concerned with the flow of that data and the connectivity of the devices generating it. IoT has created a flux of big data that must be analyzed in order to extract useful analytics.

Computation Engines:

Big data is not just about collecting and storing a large amount of data. The data is of no use to us until it gives us useful information and analytics. Computation engines provide the scalability to do this efficiently, using parallel and distributed algorithms to analyze the data. MapReduce is one of the best-known computation engines on the market at the moment.
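
To make the model concrete, here is a toy, single-machine sketch of the MapReduce idea in Python (the two "documents" are made up); real engines run the same map and reduce steps in parallel across many machines:

# Toy MapReduce-style word count.
from collections import Counter
from functools import reduce

docs = ["big data big insights", "data flows from iot devices"]

# Map step: each document independently emits per-word counts.
mapped = [Counter(doc.split()) for doc in docs]

# Reduce step: merge the partial counts (the part that runs distributed at scale).
totals = reduce(lambda a, b: a + b, mapped, Counter())
print totals.most_common(3)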

Big Data on AWS:

Amazon’s AWS provides you one of the most complete and best big data platforms in the world. It provides you a wide variety of options and different services which can help you with your big data needs. With AWS, you get fast and flexible IT solutions and that too at a low cost. It has the ability to process and analyze any type of data regardless of the volume, velocity, and variety of data. The best thing about AWS is that it offers you more than 50 services and hundreds of features are added in these services every year constantly increasing the efficacy of the system. Two of the most famous services offered by AWS is redshift and kinesis.

AWS Redshift:

Amazon Redshift is a fast, efficient, fully managed data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing Business Intelligence (BI) tools. It lets you run complex analytic queries against petabytes of structured data, and thanks to sophisticated query optimization and high-performance local disks, most results come back in seconds. It is also extremely cost efficient: you can start from as little as $0.25 per hour with no commitments and gradually scale up to petabytes of data at $1,000 per terabyte per year.

The service also includes Redshift Spectrum, which allows you to run SQL queries directly against exabytes of unstructured big data in Amazon S3. You don't need to load or transform the data, and you can use open data formats, including CSV, TSV, Parquet, Sequence, and RCFile. Redshift Spectrum automatically scales query and compute capacity based on the data being retrieved, so queries against Amazon S3 run fast and do not depend on data set size.
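
Since Redshift speaks standard SQL over the Postgres wire protocol, querying it from Python looks like ordinary database code. A minimal sketch with psycopg2; the cluster endpoint, credentials, and the events table below are placeholders, not real resources:

# Minimal Redshift query sketch (placeholder endpoint and credentials).
import psycopg2

conn = psycopg2.connect(
    host="examplecluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="dev", user="awsuser", password="secret")
cur = conn.cursor()
cur.execute("SELECT user_id, COUNT(*) FROM events GROUP BY 1 LIMIT 10;")
for row in cur.fetchall():
    print row
cur.close()
conn.close()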

AWS Kinesis:

Amazon Kinesis Analytics is another great Amazon service and one of the easiest ways to process streaming data in real time with standard SQL. The best thing about the service is that you don't have to learn any new programming languages or processing frameworks: it lets you query streaming data or build entire streaming applications using SQL. This ensures you can gain actionable insights and respond promptly to your business and, more importantly, your customers' needs.

Amazon Kinesis Analytics is a complete service that takes care of everything required to run your queries continuously, and it scales automatically to match the volume and throughput rate of your incoming data. With Amazon Kinesis Analytics, you only pay for the resources your queries consume, which makes it extremely budget friendly. There is no minimum fee or setup cost.
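
For flavor, here is a hedged sketch of the producing side: pushing JSON records into a Kinesis stream with boto3, over which a Kinesis Analytics application could then run continuous SQL. The stream name and region are placeholders, and the stream must already exist:

# Sketch: feeding JSON records into a Kinesis stream with boto3.
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")
for i in range(3):
    kinesis.put_record(
        StreamName="example-stream",          # placeholder stream
        Data=json.dumps({"event_id": i, "value": i * 10}),
        PartitionKey=str(i))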

 

Artificial Intelligence Offerings as an #API

Artificial intelligence, or "AI" for short, is the use of intelligent machines that react and work like the human mind. This area of computer science is mainly concerned with speech recognition, processing, planning, learning, and problem-solving. On the surface, artificial intelligence may be linked to robotics, as it is mostly portrayed that way in sci-fi movies, but the concept is much more complex. Artificial intelligence is now capable of much more than you think: it can reason, just like a human would; it can correct itself (self-correction); and it can learn and adapt. Most programs are fixed in the duties they perform, because their code binds them to do so. Artificial intelligence differs from traditional methods in this department.

The use of artificial intelligence is very common, ranging from the top tech businesses to an average person just using a phone or laptop. The term originated in 1956, and today it holds greater meaning than ever: it stretches from robotic process automation to actual robotics itself! Artificial intelligence has all the abilities a technical machine should have, from speed to accuracy, all while being extremely human-like. AI can identify patterns and process data more efficiently than a human can, making it essential for businesses that want to progress.

As we have concluded, AI is a broad term and is not limited to a concise definition. Artificial intelligence reaches into one's daily life: Siri, a virtual assistant, can perform a wide range of tasks, from looking up recipes to booking a flight. This type of artificial intelligence works through APIs to get the results you expect. API refers to Application Programming Interface, which acts as a channel between the user and the service provider. In the most basic terms, consider your virtual assistant Siri: on your command, it calls out to APIs to access different services, such as calling an Uber to your doorstep.

APIs and Applications of AI as an API:

As previously stated, API stands for Application Programming Interface, which provides a platform of routines and tools for building software applications and specifies how different pieces of software should interact with one another. Cortana, an assistant made by Microsoft, lets you make reservations at a restaurant by acting through an API; this example highlights the use of a simple API and AI together.

Many other such interactions can be observed. The Google Maps API permits developers to embed Google Maps in a web page using either JavaScript or a Flash interface. Using Siri to locate a road for you ties this process together: Siri is a form of artificial intelligence acting through an application programming interface. Even Tesla's self-driving cars use Google Maps as a basic platform for their self-driving capabilities.

 

 

Use of Artificial Intelligence as an API in Businesses:

Many firms have switched to better-performing artificial intelligence systems instead of their old, traditional IT-based operating systems.
Many tedious everyday tasks are now performed by AI-based systems, freeing up human resources to invest their time in projects that are more beneficial for the company. Many Customer Relationship Management systems now use machine learning algorithms to discover how to communicate better with customers: on calling, the customer is immediately connected to an AI-based operator that deals with the customer's concerns more efficiently than any human operator would.

ChatBots enable two-way communication: they use artificial intelligence to engage the customer in a conversation. Such pop-ups ask customers what they are concerned with and display only the information relevant to them. ChatBots ensure better two-way communication and help promote consumer loyalty. Companies rely on artificial intelligence to handle such matters more efficiently and professionally than their human counterparts.


The Dawn of Artificial Intelligence; from IBM to AWS:

International Business Machines (IBM): IBM is a platform that previously provided hardware but now also deals in software, including cognitive computing, a branch very similar to artificial intelligence. Its research dates back to the 1950s. IBM provides server hardware, storage, software, cloud services, and many cognitive offerings.

IBM Watson is the ultimate IBM offering for AI and big data, with tons of applications. It was introduced several years ago, and since then it has become one of the most powerful enterprise APIs out there.

Amazon Web Services Artificial Intelligence (AWS AI): AWS provides you with instances you can use to optimize your applications, either by upgrading or by tuning for performance, and offers a variety of services targeted at enterprise AI usage. It is also worth mentioning that Amazon maintains a blog about AI.

Intrigued? Create your own Artificial Intelligence based Program with APIs:

Artificial intelligence surrounds us, from Tesla's self-driving cars to Siri on your iPhone; it comes into play even when we are operating a system to get the smallest amount of output. Cortana, Siri, Tesla, Cogito, and our favorite platform for movies and series, Netflix, are all examples of artificial intelligence, and it's easy to be influenced by them.

In this technological era, nothing seems impossible: you can create your own form of artificial intelligence by using an Application Programming Interface to build your own custom software.

Using API.ai: a service that transforms speech into text and provides natural language processing, backed by an artificial intelligence system that caters to your needs.

Step 1: Log in to their site and allow the program to access the basic data of your account. Accept their terms and conditions and begin creating your own artificial-intelligence-based virtual assistant.

Step 2: Authorize access to basic information, then customize your AI assistant by adding some standard information: its name, a description (what you intend your agent to be), a language (the language your agent will operate in), and a time zone.

Step 3: The test console allows you to try out the basic operations your agent performs: you can enter queries and see how your agent responds to them. Adding small talk is a matter of preference, and you can do so by clicking the enable button.

Step 4: Save the changes you have made and find your assistant's API.ai API keys. Feel free to make additional changes if you please, then use JavaScript to connect to api.ai.

Step 5: Use HTML5 speech recognition to capture input, communicate with api.ai, and host your web interface. Last but not least, say "hello" to your state-of-the-art AI virtual assistant!
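
The query step works from any language that can make HTTP requests, not just JavaScript. Here is a rough sketch in Python against api.ai's classic v1 REST endpoint; the access token is a placeholder (it comes from your agent's settings page), and the response fields follow the v1 format:

# Sketch: querying an api.ai agent over its v1 REST endpoint.
import requests

resp = requests.get(
    "https://api.api.ai/v1/query",
    params={"v": "20150910", "query": "hello", "lang": "en",
            "sessionId": "demo-session-1"},
    headers={"Authorization": "Bearer YOUR_CLIENT_ACCESS_TOKEN"})
# The agent's spoken reply lives under result.fulfillment.speech.
print resp.json()["result"]["fulfillment"]["speech"]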

Making HTTP Requests in JavaScript

I really liked this detailed guide to consuming APIs through JavaScript. Whether you are using ReactJS, Angular, or any other frontend framework, this is basic knowledge you will have to deal with at some point. Rahul leads by example here. A must-read in my opinion.

Reblogged from CAP'n Tech:

The Introduction:

As you probably know very well by now, the internet is made up of a bunch of interconnected computers called servers. When you are surfing the web and navigating between web pages, your browser requests information from one of these servers.


That is, our browser sends a request, waits for the server to respond to the request, and (once the server responds) processes the response. All of this is governed by protocols, or rules, which are a topic for another day.

Application Program Interface (API) 

Now the Wikipedia definition of an API will tell you that an Application Programming Interface (API) is a set of subroutine definitions, protocols, and tools for building application software. But in layman's terms, in the context of the web, APIs generally allow you to send commands to programs running on the servers that…

View original post 640 more words

APIs for Authentication: A journey

Application Program Interface (API) key authentication is a technique that overcomes the hurdles of shared credentials by using a unique key for each user. The key is usually a long series of letters and numbers distinct from the account's login password. The owner provides the client with the key, and the client uses it to access a website. When a client provides the said API key, the server allows the client to access data. The server can limit administrative functions for any given client, for example changing passwords or deleting accounts. API keys are sometimes used so that account passwords do not have to be given again and again. They offer the flexibility to limit control while also protecting user passwords.

API keys work in a lot of different ways, since they were conceived by multiple companies that each had their own approach to authentication. Some schemes, like Basic Auth, use an established standard with strict rules. Over time, though, a few familiar approaches have emerged: putting the key in the Authorization header (possibly alongside a username and password), adding the key onto the URL, or burying the key in the request body together with the data. Wherever the key is added, the outcome is the same: the server grants the user access.
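
A quick sketch of those three placements using Python's requests library; the endpoint and key are placeholders:

# The three common places an API key ends up.
import requests

API_KEY = "my-secret-key"
url = "https://api.example.com/v1/items"

# 1. In the Authorization header:
requests.get(url, headers={"Authorization": "Bearer " + API_KEY})

# 2. Tacked onto the URL as a query parameter:
requests.get(url, params={"api_key": API_KEY})

# 3. Buried in the request body together with the data:
requests.post(url, json={"api_key": API_KEY, "item": "example"})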

There are different security protocols in use, such as OAuth 1.0a, Basic API authentication with TLS, and OAuth 2.0. Basic Auth is the simplest because it only requires the standard framework or language library; being the simplest, it also offers the least security and provides no advanced options. You are simply providing a username and password that is Base64 encoded.
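
A minimal sketch of what that actually means on the wire (placeholder URL and credentials):

# Basic Auth is just "user:password" Base64-encoded into a header.
import base64
import requests

token = base64.b64encode(b"alice:s3cret").decode()
requests.get("https://api.example.com/v1/me",
             headers={"Authorization": "Basic " + token})

# requests can build the same header for you:
requests.get("https://api.example.com/v1/me", auth=("alice", "s3cret"))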

OAuth 1.0a, on the other hand, is the most secure of these protocols, as it uses a cryptographic signature combined with a token secret, a nonce, and other request-based information. Since the token secret is never passed directly across the wire, there is no possibility of anyone seeing a password in transit, which gives OAuth 1.0a an edge. On the other hand, this level of security comes with a lot of complexity: you have to apply hashing algorithms in strict steps. This burden has mostly been lifted, though, since nearly every programming language now has libraries that do it for you.

Repose is another API authentication platform, providing open-source API validation, HTTP request logging, rate limiting, and much more. It employs a RESTful middleware platform that is easily scalable and extensible. OAuth 2.0 and Auth0 are both open-source API authenticators, and both take a completely different approach from OAuth 1.0a: encryption is handled by TLS (previously called SSL) rather than by per-request cryptographic signatures. There are not that many OAuth 2.0 libraries, which puts users at a disadvantage. Even so, OAuth 2.0 is used by big names like Google and Twitter.

Auth0 is a platform for authenticating apps that supports just about all identity providers on any device or cloud. It uses secure HTTPS API keys to integrate with other tools, giving a seamless experience, and it lets clients authenticate with credentials they are comfortable with.

Many API management platforms are available, each bringing something unique to the table. Kong is an API manager that offers a range of plugins to improve security, authentication services, and management of inbound and outbound traffic. Kong acts as a gateway between the client and the API, providing layers of rate limiting, logging, and authentication.

3Scale is another manager; it separates the traffic control and management layers, which produces superior, unsurpassed scalability. It integrates many gateway deployments with Amazon, Heroku, and Red Hat OpenShift, which are free to use. Additionally, plugins can be added to libraries built in several different languages, and 3Scale designs custom API management protocols for organizations as well. Microsoft Azure also provides a host of options so that little effort is required on the client's part and most of the work is handled by managers. Azure has a professional front end and developer portal that make it more user-friendly. It offers the greatest number of options for APIs and thus attracts more clients.

Dell Boomi can be thought of as cloud middleware: plumbing between applications that reside in the cloud or on premises. It can efficiently manage data for social networks and other uses, and it communicates data across different or common domains, which gives it an added advantage. MuleSoft is another API manager; it makes use of the Anypoint platform to re-architect SOA infrastructure across legacy systems, proprietary platforms, and custom integrations. The result is a strong, agile business solution for its clients.

AWS Cognito is another management system, offered by Amazon Web Services. It has an adaptive multi-layer design that includes products ensuring availability and resilience, and it is built with security as its key feature. It can be deployed on any platform, using a lock library or a custom-built implementation chosen from more than 50 integrations. It enables clients to authorize users through an external identity provider that assigns temporary security credentials for accessing your website or app, and it supports external identity providers using OpenID and SAML, plus the option to integrate your own identity provider.

Recently, APIs have found applications in health-related fields. A vast majority of healthcare providers and other companies in the healthcare industry are making use of web and mobile services. They provide vital information to patients and help them share information with prescribers. Medical APIs will also help with integration between partner providers, patient support services, insurance companies, and government agencies. But whether these APIs are HIPAA compliant is a question many users have. Yes: there are many providers that meet the challenge of conforming to client demands while also ensuring the security of medical data.

Apigee Edge is one such platform; it enhances the digital value chain from the back end through to the customers who engage with an app. It is HIPAA (Health Insurance Portability and Accountability Act) and PCI compliant. Apigee maintains compliance through a number of features, including encrypting and masking information, protecting traffic, and managing and securing all data.

For healthcare providers, there are other API managers that provide HIPAA compliance, like TrueVault. TrueVault acts as an interface between internal data and external applications. For instance, if a diagnostic laboratory wants to provide online viewing of test results, TrueVault lets it grant approved third parties access to that information without custom APIs or hooks. It thereby provides a secure service that not only saves time but delivers information to patients via mobile and tablet interfaces.

Still, there are many challenges API managers face in building optimized solutions for the healthcare sector. Lack of access to effective tools for testing and monitoring these interfaces is a serious obstacle for developers. Furthermore, developers lack insight and feedback on medical APIs, which is a critical factor in developing the elaborate, engaging APIs that the medical field will widely adopt.


Related Links:

  1. Apigee management compliance: https://apigee.com/about/cp/api-management-compliance
  2. MuleSoft API manager: https://www.mulesoft.com/
  3. TrueVault Systems: https://www.truevault.com/healthcare-api.html
  4. Microsoft Azure: https://azure.microsoft.com/en-us/resources/videos/azure-api-management-overview/
  5. Dell Boomi: https://marketing.boomi.com/API-Management-Demo-Success.html
  6. Kong API manager: https://getkong.org/ and https://getkong.org/plugins/oauth2-authentication/
  7. 3Scale management: https://www.3scale.net/technical-overview/
  8. Akana API management solutions: https://www.akana.com/solutions/api-management
  9. Auth0: https://auth0.com/opensource
  10. Repose API manager: http://www.openrepose.org/
  11. OAuth 2.0: https://oauth.net/2/
  12. OAuth 1.0a: https://oauth.net/core/1.0a/

Mega Cities of Today and Tomorrow

I have been talking a little bit about Smart Cities since the very beginning of this blog.
Smart Cities for me are just a huge playground for APIs, Cloud Computing and more or less all the technologies that we utilize and promote nowadays.
Today there are so many huge, enormous mega cities, and the urgency for "smart solutions" grows bigger and bigger.
The following visualizations clearly depict how those needs can be handled, and why there are such needs in the first place.

Reblogged from BLACK BOX PARADOX:

A bit of analytics on mega cities of today:


People will continue moving into cities in the future:

View original post 20 more words

Scrape a Webpage using Python 2.7

I have been drafting a similar article for quite some time, but then my friend Konstantinos posted this, and I just loved it: the simplicity, the straight points he makes, and obviously the hands-on tutorial. I just hope that you will enjoy it as much as I did!

Reblogged from My Data Mining:

Similar to the previous post, in this post, we are going to learn how to extract information from the Internet. We have to create a dataset first, to implement data mining techniques. So, let’s start.

Github Code of this project.

1. What is scraping?

Scraping is a technique that allows us to extract information from the Internet. For example, scraping a web page means that we are going to extract the HTML from that page and then take the ‘useful’ information from the HTML. Useful information is the information that we need, for example, the infobox of a Wikipedia page or the meta tags of a web page, etc. For more information, you can check the definition of Web Scraping.
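
As a flavor of what this looks like in practice, here is a minimal sketch (mine, not the post's own code) that fetches a page and pulls out its title and meta tags with requests and BeautifulSoup:

# Fetch a page and extract its title and meta tags.
import requests
from bs4 import BeautifulSoup

html = requests.get("https://en.wikipedia.org/wiki/Web_scraping").text
soup = BeautifulSoup(html, "html.parser")
print soup.title.string              # the page title
for meta in soup.find_all("meta")[:5]:
    print meta.attrs                 # each meta tag's attributes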

2. Scraping a Webpage

For this project we are going to need the following packages:

Like before, I am going to build the project as…

View original post 823 more words

Build Your Own Udemy

Today we are all living in a technology-driven world where online learning has become an important and totally worthwhile way of learning on the go. The future of higher education now lies in the hands of online learning systems. College and university students find themselves burdened with jobs and family commitments, and having the option of studying on their own time has become critically important: it is convenient and less expensive for most students, and you can work on a course just about anywhere you have computer access.

With the expanding popularity of online learning platforms like Udemy and Khan Academy, the question arises: how can we make our own online learning platform, and what are the core technologies involved in developing such systems? A big part of the answer is application programming interfaces (APIs). APIs are sets of instructions or requirements that govern how one application can communicate with another.

The function of an API is usually fairly straightforward and simple. Choosing which type of API to build, understanding why that type is appropriate for your application, and then designing it to work effectively are the keys to giving your API a long life and making sure developers actually use it.

There are many types of APIs available. For example, you may have heard of Java APIs or interfaces within different classes that let objects interact with each other in the Java programming language. Along with program-centric APIs, we also have Web APIs like the Simple Object Access Protocol (SOAP), Remote Procedure Call (RPC), and the most popular at least in name, Representational State Transfer (REST).

There is more than one alternative

If you’re looking for building your own platform for e-learning like Udemy, it’s important to decide which type of method you have in mind for the delivering lectures of courses that are offered, it can be audio, video or simple text. Video lectures are more in trend these days so now it’s important to know  how to make your own live streaming videos for course lectures, there are a lot of APIs that can offer to make an application that is user friendly and fast but for specific live video streaming Castasy is a good way of doing so as it’s a  cost efficient solution that has arrived in the form of a software  This new live streaming software comes with compatible versions for both iOS and Android devices and also comes in a desktop version. The software basically allows the user to have an application and website that could stream live videos with their own live streaming software. The user is capable of allowing access or denying access to any follower. Each video gets a separate URL and posting that specific URL in their browser, users can view the video at their desktops with the website version of that software. With different URLs users have the facility to view a number of videos available in the website version of the software The live streaming software also withholds a chat feature facilitating viewers to chat on videos as they are streamed so they can discuss relevant topics related to that video it’s a very good feature for e-academies as it helps the students to discuss different queries through chat.

Now, if we talk about the most popular and efficient API providers in this space, Citrix, GoToWebinar, and Udemy usually come to mind. Let's look at them one by one, in detail.

What Citrix basically does is stream applications from a centralized location into an isolated environment, where they are executed on different target devices. Application configuration, settings, and relevant files are copied to the client device. With session virtualization, applications are delivered from hosting servers in the data center via application streaming. The user is connected to the server to which that specific application was delivered; the application executes on the server, so the server's power is maximized. While the server receives the mouse clicks and keyboard strokes, it sends all the screen updates back to the end-user device.

GoToWebinar is purpose-built for do-it-yourself webinars, making it easy for multinational organizations to deliver their message to thousands of people at the same time and eliminating costly travel or expensive marketing promotions. Innovative in-session features and reports help businesses evaluate whether their webinars were successful. It's actually a Citrix product, but it's usually considered a separate API.

If we look at Udemy as an API, then depending on our intended use case we may be interested in creating our own courses (essentially our own e-learning platform), consuming premium courses, or developing our own courses through Udemy. It's an easy way to provide services online and earn a bit of a fortune.

API’s pricing benefits Availability
Gotowebinar For starters, it costs $89.00 and can provide services for up to 100 participants

For Pro it costs

$199.00/mo and can entertain up to

500 Participants

For plus it costs $429.00 and can provide services for

2000 Participants

·      Reliable

·      Ease of use

·      Cost efficient

·      Saves time and money that is otherwise consumed on marketing

Easily available in the US and outside of US
Citrix It ranges between 80$ to 500$ ·      standardized, common setup.

·      compress the data

·      it’ll encrypt the data for security

·      the performance is faster

·      centralized management

Easily available all around the globe
Udemy ·        list prices of Udemy range between $20 – $200.

·       Larger discounts are offered.

·       We can run promotions if different courses in 10 to 15$

 

 

  • The ability to create your own courses
  • The easiest opportunity to centralize your training materials
  • Easy Management of users and courses

 

Available all around the world

 

It is not as hard as you may think

Each of these API technologies has a lot of benefits, and most are available all around the globe. If we want to build our own e-learning platform, it's easier to utilize these APIs than to develop our own from scratch: it's cost efficient and gives us all the desirable features. Whether it's online streaming of lectures or publicity for a seminar, they provide every feature necessary to develop our own Udemy.

Cloud Computing is every #Startup’s #CTO best friend

The needs of a startup:

Chief technology officers play a major role in managing the technical aspects of a company, especially at startups. A company's requirements in its early stages differ considerably from its requirements later on. For most startups, the initial period is turbulent: the market waters are harsh and finding loyal partnerships is cumbersome. For CTOs, this period can be exceedingly stressful; they have to ensure the entire operation of the company runs smoothly at every point. As the world advances into digital zones, the burden on CTOs has increased. Initially, the company may hire a lot of IT professionals to take care of technology needs; as time goes on, these professionals are cut down while some advance and take on more responsibility. The later stages of a startup are more secure and stable: by this point CTOs already have their strategy in motion, they have hired professionals to handle the technology work, and their major role lies in supervision. During the middle period, however, CTOs can face numerous challenges. Finding the right balance in the company, managing resources, storing data, and keeping the company wired, operational, and connected to the market can all be hurdles. Diligent CTOs manage the company's needs while keeping their eye on the end prize.

The role of a CTO

Chief technology officers are required to maintain the smooth functioning of technology while reducing the company's expenses. Micro-level events are exceptionally useful for CTOs, and they are always on the lookout for changes that might occur at this level, for example ways in which digital technology can be improved. Since data is the basic tool of most companies, CTOs often look for ways to improve data throughput. The technology market and all its innovations are always on a CTO's radar. These people do not invest impulsively; rather, they make calculated decisions to ensure that every investment results in incremental growth and money saved for the company. CTOs look at market trends and environments, the evolutions that take place, and the competition they face in the market. Moreover, these officers pay diligent attention to customer preferences and buying habits; these two aspects show the company how to market products so they become more appealing to customers. Customer needs are evaluated on a five-year basis, as customer preferences change only slightly during this time frame. However, if certain technological advances make big waves in the market environment, then CTOs are required to change their strategies accordingly. While these are the basic requirements and credentials of CTOs, hiring equally qualified tech experts also falls under their domain. CTOs are also required to manage their team and ensure every department's technology needs are fulfilled and run smoothly at all times.

 


What is Cloud Computing?

Cloud computing, or internet-based computing, is on-demand access to a pool of configurable computing resources. These resources can include computer networks, data, storage, servers, applications, and other services. The services can be provisioned with minimal management and are normally safer and more reliable for data needs. Cloud storage and computing give customers and companies a platform to store their data safely, privately, and even remotely. In some cases outsourced companies may be involved in providing the services; other cloud-based offerings are very personalized.

Cloud computing and services can really reduce the cost of a company's technology infrastructure. For startup companies, costs are already high and initial revenue low, so cloud computing provides an easy, accessible, and cheap option: they do not need to buy separate servers. By taking care of organizations' IT needs, it gives companies the leverage to focus on central issues and core business goals. Moreover, it allows CTOs to manage technology needs faster, more professionally, and in a systematic manner. When such professionals have to take care of big data and services on a daily basis, they rarely find time to focus on the more important issue at hand: managing the technology resources. Since these servers are outsourced, maintenance costs are negligible for the company. In addition, it reduces the company's personnel needs, and hence cuts costs considerably.

While cloud computing can offer a range of benefits to companies, there are some drawbacks as well. Public cloud computing options carry real risk, and in the past there have been countless breaches that resulted in loss of personal information from companies. This information can include sensitive credit card information, employee details, or any company data. Usually hackers release such information on social media outlets, and this can put the public image of a company in jeopardy. There have been numerous documented cases of theft and cyber hacking on public clouds; it is less common in private cloud computing. Nonetheless, the associated risks are very high, and due to the remote nature of the crime, the criminal can be very hard to track down.

Cloud Computing for CTOs: Design solutions in Cloud

Cloud computing can offer a lot to companies, especially CTOs. Not only are there many cost-saving benefits to employing such a service, but most technology aspects of the company get assisted by it. Cloud computing solutions are cheaper for companies, and by outsourcing data and IT needs, CTOs can focus on what truly matters: designing solutions to run the company seamlessly. The data becomes much easier to manage, becomes more transparent, and storage issues rarely arise.

Amazon’s CTO, Werner Vogel has already spoken about the benefits he has reaped from cloud computing in his company. Vogel advocated the services in a conference, stating, “the cloud has nothing to do with technology, the cloud is defined by all its benefits”.

While apps and gadgets can take care of data storage needs, for companies and startups the cost of downtime can be great; by investing in cloud services, this downtime can be prevented. According to Vogels, if cloud services lower their costs and tackle privacy issues, companies will advance at a remarkable rate.

 

 

 

ChatBots is just a fancy name for Search APIs

What are ChatBots?

ChatBots are smart computer programs that mimic the real-life conversations people have, with artificial intelligence on one end. From quasi-conversation to self-initiated tasks, ChatBots can revolutionize the way you behave and connect on the internet. They can serve many functions, and with them, interacting with and getting information from the internet becomes easy and convenient. A ChatBot replies based on the responses fed into the system by the programmer, choosing the best option for the scenario. The job of ChatBots seems rather straightforward: they are there to provide information to customers and users. Without this technology, you would have to go to a website and maneuver around until you found the thing you were looking for, order it, and check out. With a ChatBot, you can save yourself a lot of time by letting the artificial intelligence know exactly what you are after, and within minutes you could find it. The chat is similar to the one you would expect to have with a salesperson. Facebook is one of the biggest companies investing here: Messenger already hosts 30,000 or more bots, which the company believes are vital to the business. This means that Facebook can cut back on the human employees who were required for the tasks these bots now undertake.
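
To ground the "responses fed into the system by the programmer" idea, here is a toy rule-based bot in Python; the rules and wording are made up for illustration:

# A toy keyword bot: canned responses chosen per scenario.
RULES = {
    "price": "Our plans start at $10/month.",
    "hours": "We're open 9am-5pm, Monday to Friday.",
    "hello": "Hi! How can I help you today?",
}

def reply(message):
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    # No match: a real bot would log this and learn from it.
    return "Sorry, I don't have an answer for that yet."

print reply("Hello there!")
print reply("What are your hours?")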

Why are they important to us now?

More recently, ChatBots have become a necessary technology, with Facebook and Microsoft, among other great companies, dedicating resources to them. While ChatBots seem like a novel technology, they are actually older than we think: machines have been created to imitate human behavior for ages. The notion of integrating ChatBots within apps, however, is a novel concept. New-age bots are not looking to imitate humans completely; instead, they are what they are: machines intended to interact with humans. These machines can only respond to limited commands, and if a bot does not have an answer for your question, it won't be of use; no amount of brainstorming will give the bot a eureka moment. However, bots are adaptable, which means they constantly learn and get better with time. As they register interactions and responses, their range constantly increases.


ChatBots can be used to make automated interactions, like booking, more fun.

As more people move from public social media sites to private networks like Facebook Messenger, Slack, WeChat, and Telegram for communication, the role of ChatBots is increasing. For a business to reach more customers, interaction and communication are important, and if more people are now using messenger apps, then ChatBots make sense. According to a new survey, nearly 89% of customers are looking forward to engaging with AI virtual assistants, so the companies investing in them are making a smart move by the looks of it.

Technology around ChatBots:

As with their use in businesses, the technology around ChatBots is also evolving at a rapid pace. There are many online resources that let you build your own ChatBots. Facebook has recently launched its "bots for Messenger" tool, which allows developers to build the best bots they can, which can be bought by Facebook for a hefty price. The underlying technology uses artificial intelligence markup language and incorporates a tool that provides API hosting platforms.

There are many APIs that may be used for creating the perfect ChatBot, including cloud-based NLP APIs such as API.ai and Wit.ai. API.ai is a bot development platform that makes it convenient to integrate with outlets such as Facebook Messenger, Skype, and Microsoft products. Wit.ai is very similar; however, in comparison to API.ai it is slightly more complicated to use. API.ai has better usability, is easier to work with, and its NLP engine is more mature.

ChatBots are quite the hype right now in the startup community, and scripting for ChatBots has also been improving over the years. RiveScript is one of the simplest scripting languages for ChatBots and is utilized by many developers. The site is very user-friendly and the syntax is relatively easy to learn: there is no need for intricate XML structures, random symbols, or line noise. Interfaces are available for Java, JavaScript, Go, and Python.
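
A hedged sketch with the Python interface (the rivescript package on PyPI); the tiny two-rule brain here is streamed in directly instead of being loaded from .rive files:

# Minimal RiveScript bot using the Python binding.
from rivescript import RiveScript

bot = RiveScript()
bot.stream("""
+ hello bot
- Hello, human!

+ my name is *
- Nice to meet you, <star>!
""")
bot.sort_replies()  # must be called after loading rules
print bot.reply("localuser", "hello bot")
print bot.reply("localuser", "my name is alice")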

Another important language for scripting ChatBots is AIML (Artificial Intelligence Markup Language). This is an XML-compliant language that is relatively easy for a newbie to learn, and it makes it possible to customize an Alicebot or build a bot from scratch.

Future of ChatBots

Advancements in artificial intelligence, in addition to the rise of messaging apps, will surely accelerate the development of ChatBots. As more businesses invest in ChatBots, a bot revolution can be expected. As ChatBots make their way into every corner of industry, it is easy to see how our lives will change. Currently, both Google and Facebook are working on master bots that will manage other bots, much like an organization. In the future, virtual assistants such as Amazon Echo and Siri will also have bot services and master bots that allow every intelligent system to integrate seamlessly. Before this can be implemented, however, the plumbing needs to be built and script-writing platforms need to improve. This is already happening, and the future of APIs seems bright. Artificial intelligence and ChatBots have a vital role to play in the future of business, which means that with more people interested in the technology, more programmers will invest in building platforms for easy script writing. How the future plays out for ChatBots, only time will tell; however, by the looks of it, many large corporations are already on board. The three biggest tech giants, Facebook, Amazon, and Google, have been investing in ChatBots, and the results will be tremendous. As more companies come around to the idea and understand how customer preferences are changing, the times for ChatBots seem to be looking up.