Tech Talk: How TBA Scales to Handle Competition Season Load

The Blue Alliance Blog

410K+ web page views. 350K+ API requests received*. 110K+ notifications sent. That’s how much load the TBA servers experienced on a typical Saturday during the 2017 competition season. Here’s a look at how The Blue Alliance scales to meet demand while keeping running costs low. In short: TBA uses Google’s scalable web platform and a whole lot of caching.

* The actual number of API requests is higher than this, but due to caching, our servers only see and track a fraction of the requests made. More on this later.
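The footnote above is the key idea: a cached response never reaches (or gets counted by) the application servers. As a rough illustration of the pattern — not TBA's actual code — a TTL cache in front of an expensive page handler might look like this:

```python
import time

def ttl_cache(ttl_seconds):
    """Cache a function's results for ttl_seconds (illustrative only)."""
    def decorator(fn):
        store = {}  # key -> (expires_at, value)
        def wrapper(key):
            now = time.time()
            hit = store.get(key)
            if hit and hit[0] > now:
                return hit[1]          # served from cache: no backend work
            value = fn(key)            # cache miss: do the expensive work
            store[key] = (now + ttl_seconds, value)
            return value
        return wrapper
    return decorator

calls = []

@ttl_cache(ttl_seconds=60)
def render_page(team_key):
    calls.append(team_key)             # track how often the backend runs
    return "page for %s" % team_key

render_page("frc254")
render_page("frc254")  # second request is a cache hit; backend runs once
```

With a cache like this in front of the app, only the first request per TTL window costs server time; the rest are nearly free, which is how request counts can be far higher than what the backend ever sees.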

[Figure: 2016 vs. 2017 page views]

Google App Engine

The main backend for The Blue Alliance runs on Google App Engine (GAE), a fully managed, highly scalable cloud platform. This gives developers the freedom to spend more time implementing features rather than managing servers and other infrastructure — very beneficial for a community-driven project…



The Versions of the #Web

From the birth of the commercial Internet to what it is today has not been a long journey. Evolution was slow and halting in the beginning, but today change is happening rapidly. The future we were once discussing is no longer a concept, but close to practical reality. Let’s take a look at the journey of the web: the advancements in the technologies that enable it, and its evolution into what it is today and what it will be in the future.

The Tech Side of Things

In this section, I chose to discuss changes and improvements in HTTP and HTML over the years and how these changes affected the Internet we use today.


One of the most widely adopted application protocols on the Internet, HTTP was designed in the early 90s. The first version, unofficially labeled 0.9, was a very simple prototype built by Tim Berners-Lee. The telnet-friendly protocol consisted of a single GET request line with the path of the document, and no headers or metadata.

HTTP 1.0

With the emergence and quick growth of consumer-oriented public internet infrastructure came the HTTP 1.0. Some of the key protocol changes from the prototype version were:

  • The request may consist of multiple newline-separated header fields.
  • The response object is prefixed with a response status line.
  • The response object has its own set of newline-separated header fields.
  • The response object is not limited to hypertext.
  • The connection between server and client is closed after every request.

With HTTP 1.0, the response object could be of any type, not just hypertext. The “hypertext” part of the protocol’s name stayed, however. Almost every web server today can still speak HTTP 1.0.
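To make those bullet points concrete, here is a sketch in Python (with a made-up document and server) of what an HTTP 1.0 exchange looks like on the wire: a request with newline-separated headers, and a response prefixed with a status line, carrying its own headers and a body that need not be hypertext:

```python
# A minimal HTTP/1.0 request: method line plus newline-separated headers.
request = (
    "GET /index.html HTTP/1.0\r\n"
    "User-Agent: example-client/0.1\r\n"
    "\r\n"
)

# A made-up HTTP/1.0 response: status line, then headers, then a body
# that is not limited to hypertext (here it is plain text).
raw_response = (
    "HTTP/1.0 200 OK\r\n"
    "Content-Type: text/plain\r\n"
    "Content-Length: 13\r\n"
    "\r\n"
    "Hello, world!"
)

# Split the response into its three parts, as a client would.
head, _, body = raw_response.partition("\r\n\r\n")
status_line, *header_lines = head.split("\r\n")
headers = dict(line.split(": ", 1) for line in header_lines)

print(status_line)               # HTTP/1.0 200 OK
print(headers["Content-Type"])   # text/plain
print(body)                      # Hello, world!
```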

HTTP 1.1

The first official HTTP 1.1 standard was defined in 1997. It resolved many protocol ambiguities found in earlier versions and added performance-critical optimizations such as keep-alive connections and transfer encodings. Keep-alive allowed an existing TCP connection to be reused for multiple requests to the same host, delivering a much faster end-user experience. To terminate a persistent connection, the client sent an explicit close token to the server via the Connection header.
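The connection-handling rule just described fits in a few lines. The helper below is an illustrative sketch, not a full implementation of the spec: HTTP/1.1 connections persist by default until the explicit close token is sent, while HTTP/1.0 connections close after every request unless keep-alive is negotiated:

```python
def connection_persists(http_version, headers):
    """Decide whether the TCP connection stays open after a response.

    Illustrative sketch of the rule described above, not a complete
    implementation of the HTTP connection-management logic.
    """
    connection = headers.get("Connection", "").lower()
    if http_version == "HTTP/1.1":
        # Persistent by default; the explicit close token terminates it.
        return connection != "close"
    if http_version == "HTTP/1.0":
        # Closed after every request unless keep-alive was negotiated.
        return connection == "keep-alive"
    return False

print(connection_persists("HTTP/1.1", {}))                        # True
print(connection_persists("HTTP/1.1", {"Connection": "close"}))   # False
print(connection_persists("HTTP/1.0", {}))                        # False
```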

HTTP 1.1, revised

HTTP never had a 1.2 release. Instead, the 1.1 specification itself was revised in 1999 as RFC 2616, which clarified caching, connection management, and other details of the protocol. That document remained the definitive description of HTTP until the next major version arrived.

HTTP 2.0

With the rise in devices and Internet use, HTTP 1.1 began to hamper performance, and demand grew for an update that could decrease latency and keep up with increasing needs. HTTP 2.0 therefore arrived in 2015, and it was standardized and supported by most major browsers by the end of the year. It made no changes to how existing applications work, but provided new features that applications can take advantage of for significant performance and speed improvements.
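One of the features behind those speed gains is multiplexing: instead of queueing whole responses one after another, HTTP/2 splits them into frames tagged with a stream ID and interleaves them on a single connection. This is a toy model of the idea, not the real binary framing:

```python
def interleave(streams):
    """Round-robin frames from several streams onto one connection.

    Toy model of HTTP/2 multiplexing: each frame is tagged with its
    stream ID so the receiver can reassemble responses independently.
    """
    frames = []
    queues = {sid: list(chunks) for sid, chunks in streams.items()}
    while any(queues.values()):
        for sid in sorted(queues):
            if queues[sid]:
                frames.append((sid, queues[sid].pop(0)))
    return frames

# Two responses share the connection instead of blocking each other.
wire = interleave({1: ["a1", "a2"], 3: ["b1", "b2", "b3"]})
print(wire)  # [(1, 'a1'), (3, 'b1'), (1, 'a2'), (3, 'b2'), (3, 'b3')]
```

Because a slow stream no longer blocks the others, the head-of-line queueing that plagued HTTP/1.1 at the application layer goes away.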


Hypertext Markup Language (HTML) is the markup language that enables the creation of web pages and web applications. Along with CSS and JavaScript, it is a foundation of the World Wide Web. The first two versions of the language were very limiting; still, HTML 2.0 remained the standard for website design until January 1997.

HTML 3.0

As more people got into HTML, it gained popularity and authors demanded new features. Around this time Netscape, the leading browser on the market, introduced proprietary tags and attributes into its browser to appease the cries of HTML authors. Being proprietary meant that a page using these tags looked bad in other browsers.

HTML 3.0 was therefore developed with far greater capabilities and features. However, it failed because browsers were slow to incorporate the new features and abandoned most of them. The W3C, founded in 1994, then standardized the language to steer its development in the right direction. This first standardized version was toned down to contain fewer features, making adoption easier. It came to be known as HTML 3.2, published in January 1997, and is supported by almost all browsers today.

HTML 4.0

HTML 4.0 was designed to include the features that had been dropped in the move from the failed 3.0 to version 3.2. It contained support for HTML’s new presentational companion language, CSS. HTML 4.0 became the official standard in 1998 and was quickly incorporated by Microsoft into their latest browser. After revisions and corrections to the documentation, the final version came to be known as HTML 4.01.


HTML 5 is the current and latest version of HTML. It contains new elements, attributes, and behaviors, as well as a large set of technologies that allow the building of diverse and powerful websites and web applications that are also mobile friendly. Some of the technologies HTML 5 offers include:

  • Semantics: allowing you to describe more precisely what your content is.
  • Connectivity: allowing you to communicate with the server in new and innovative ways.
  • Offline and storage: allowing web pages to store data on the client-side locally and operate offline more efficiently.
  • Multimedia: making video and audio first-class citizens in the Open Web.
  • 2D/3D graphics and effects: allowing a much more diverse range of presentation options.
  • Performance and integration: providing greater speed optimization and better usage of computer hardware.
  • Device access: allowing for the usage of various input and output devices.
  • Styling: letting authors write more sophisticated themes.

The Less Tech Side of Things

On the less technical side of things, looking at evolution from the user’s end, the Internet has done more than evolve to show images and load web pages faster. It is no longer what it was ten years ago, and ten, or even five, years from now it won’t be what it is today.

Web 1.0

Tim Berners-Lee describes Web 1.0, the Internet before 1999, as the read-only version of the Internet. It was the version of the web that consisted entirely of web pages connected to each other through hyperlinks: a time of static dotcom websites that did not provide any form of interactive content. Web 1.0 was very different from the Internet we’re used to today. The technology was still developing; the Internet was in its first stage. There were millions of websites, but no active communication or information flow from the information reader back to the information producer.

Web 2.0

Web 1.0 was lacking in user interaction, and this led to the development of Web 2.0. It can be called the read-write era of the web, as it enabled information to flow from the user’s end as well. Web 2.0 emphasizes user-generated content, usability by non-experts, and interoperability, meaning that websites work equally well across multiple devices and platforms. It is also called the social web, as it empowered the common user with blogs, social media, and video streaming. Any user can not only interact with content but generate their own as well. Thus, with Web 2.0, users are more involved with the information available to them. Popular examples of Web 2.0 include Facebook, Twitter, and YouTube.

Web 3.0

Web 3.0 is the newest version of the web, one you might not be fully aware of, as the change is not as noticeable as that from 1.0 to 2.0. Web 3.0, also known as the semantic web, combines semantic markup and web services to make content readable by machines. It provides context to information and develops interactions between machines and databases: a machine can move from one database to the next because they share information on a topic, rather than being merely connected. It is still in development and improving every day. Web 3.0 learns our habits and preferences to provide only the most relevant and useful information. It also involves the emergence of a 3D, virtual, inter-spatial internet: the use of wearable devices to access places virtually through the Internet, and much more.
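The machine-readable linking that the semantic web describes is often modeled as subject–predicate–object triples. Here is a minimal sketch (with invented data) of how a machine could chain facts across such a store without any human reading involved:

```python
# A tiny triple store: (subject, predicate, object) facts a machine can query.
triples = [
    ("alice", "likes", "jazz"),
    ("jazz", "is_a", "music_genre"),
    ("alice", "lives_in", "berlin"),
]

def objects(subject, predicate):
    """Return every object linked to `subject` by `predicate`."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# Chain two facts together: what kind of thing does alice like?
liked = objects("alice", "likes")[0]
print(objects(liked, "is_a"))  # ['music_genre']
```

Real semantic-web systems use standardized formats (RDF, SPARQL) for exactly this kind of machine-to-machine inference, but the underlying shape of the data is the same.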

Web 4.0

Although Web 4.0 isn’t entirely here yet, it is no longer just a concept either. It will be the open, fully linked, and intelligent web, driven by the information collected through all the connected devices we use. As a result, content will be more personalized and relevant than ever. An important part of Web 4.0 is the Internet of Things. With your car, air conditioner, watch, mobile phone, work and home computers, and even your refrigerator connected and sharing information, the web will be more informed and more connected than ever for each individual. It will be an always-on version of the Internet, tapped into our lives, learning, and responding, constantly adding value to even the smallest of our tasks with relevant and useful information and services.

gRPC Practical Tutorial – Magic That Generates Code

This is a really nice article I wanted to share that helped me understand the gRPC protocol a little bit deeper. It also offers a nice hands-on experience for the active developer!

What is gRPC?

gRPC is a language-neutral, platform-neutral framework that allows you to write applications where independent services can work with each other as if they were native. If you write a REST API in, say, Python, you can have Java clients work with it as if it were a local package (with local objects).

This figure should clarify it even further.

As in many RPC systems, gRPC is based around the idea of defining a service, specifying the methods that can be called remotely with their parameters and return types. On the server side, the server implements this interface and runs a gRPC server to handle client calls. On the client side, the client has a stub that provides exactly the same methods as the server.


Why use it?

  • gRPC is supported in tons of languages. That means, you write your service in, say, Python, and get FREE!!! native support in 10 languages.
    • C++
    • Java
    • Go
    • Python
    • Ruby
    • Node.js
    • Android Java
    • C#
    • Objective-C
    • PHP
  • gRPC is based on the brand new shiny HTTP/2 standard which offers a bunch of cool stuff over HTTP/1. My favorite HTTP/2 feature is bidirectional streaming.
  • gRPC uses Protocol Buffers (protobufs) to define the service and messages. Protobuf is a thing for serializing structured data that Google made for the impatient world (meaning it’s fast and efficient).
  • As I mentioned, gRPC allows bi-directional streaming out of the box. No more long polling and blocking HTTP calls. This is valuable for a lot of services (any real-time service, for example).
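Bidirectional streaming means a handler receives an iterator of requests and yields responses as they become ready, rather than blocking on one request/response pair. Stripped of all the gRPC plumbing, the shape of such a handler in plain Python looks like this (a hypothetical echo service, not part of the tutorial below):

```python
def echo_upper(request_iterator):
    """Shape of a bidirectional-streaming handler: consume a stream of
    requests, yield a stream of responses (gRPC plumbing omitted).

    In real gRPC, a streaming servicer method has exactly this generator
    form; the framework feeds it requests as they arrive on the wire.
    """
    for message in request_iterator:
        yield message.upper()

# Client and server each see a lazy stream, not one big payload.
responses = list(echo_upper(iter(["hello", "grpc"])))
print(responses)  # ['HELLO', 'GRPC']
```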

Why it could fail.

  • gRPC is Alpha Software™. That means it comes with no guarantees. It can (and will) break, docs are not yet comprehensive, support could be lacking. Expect tears and blood if you use it in production.
  • It has no browser support, yet. (Pure JS implementation of protobufs is in alpha, so this point will likely be moot in a few months).
  • No word from other browser vendors on standardization (which is why Dart didn’t catch on).

What are we making?

I was thinking really hard about what I should make that is simple enough for most people to follow, but also practical enough that you can actually use it in your projects.

I use Twitter a lot, and have worked on a lot of projects using the Twitter API. Almost every project requires parsing the tweet text to extract tagged users, hashtags, URLs, etc. I always use the twitter-text-python library. I think it would be great to write a server wrapping this package in Python, and then generate stubs (native client libraries) in Python and Ruby.

All code is here:

What you need

  • protoc – install
  • grpc-python – pip install grpcio
  • grpc-ruby – gem install grpc

Installation of these should be easy.


The proto file is where we define our service, and the messages that compose the service. For this particular project, this is the proto file we are using.

I have commented out the file so it should be pretty straightforward.

// We're using proto3 syntax
syntax = "proto3";

package twittertext;

// This is the service for our API
service TwitterText {
  // This is where we define the methods in this service

  // We have a method called `Parse` which takes a
  // parameter called `TweetRequest` and returns
  // the message `ParsedResponse`
  rpc Parse(TweetRequest) returns (ParsedResponse) {}
}

// The request message has the tweet text to be parsed
message TweetRequest {
  // The field `text` is of type `string`
  string text = 1;
}

// The response message holds the parsed entities
message ParsedResponse {
  // `repeated` is used for a list
  repeated string users = 1;
  repeated string tags = 2;
  repeated string urls = 3;
}

Full proto3 syntax guide can be found here.

Generate gRPC code

Now comes the fun part. We are going to use gRPC to generate libraries for Python and Ruby.

# Python client
protoc  -I protos/ --python_out=. --grpc_out=. --plugin=protoc-gen-grpc=`which grpc_python_plugin` protos/parser.proto

# Ruby
protoc -I protos/ --ruby_out=lib --grpc_out=lib --plugin=protoc-gen-grpc=`which grpc_ruby_plugin` protos/parser.proto

What has happened is that based on the proto file we defined earlier, gRPC has made native libraries for us.

The first command will generate parser_pb2.py. The latter will generate lib/parser.rb and lib/parser_services.rb. All three files are small and easy to understand.

A Python client can now just import parser_pb2 and start using the service as if it were a native package. Same for Ruby.


I decided to make my server in Python, but I could have used Ruby as well.

import time

from ttp import ttp

# Bring in the generated package for our service
import parser_pb2

_ONE_DAY_IN_SECONDS = 60 * 60 * 24

# This is the parser from the third-party package,
# NOT from gRPC
p = ttp.Parser()


class Parser(parser_pb2.BetaTwitterTextServicer):
    def Parse(self, request, context):
        print 'Received message: %s' % request
        result = p.parse(request.text)
        return parser_pb2.ParsedResponse(users=result.users,
                                         tags=result.tags,
                                         urls=result.urls)


def serve():
    server = parser_pb2.beta_create_TwitterText_server(Parser())
    server.add_insecure_port('[::]:50051')
    server.start()
    try:
        while True:
            time.sleep(_ONE_DAY_IN_SECONDS)
    except KeyboardInterrupt:
        server.stop(0)


if __name__ == '__main__':
    serve()
At this point, it is helpful to have the generated parser_pb2.py open. What we are doing in class Parser is implementing the generated interface parser_pb2.BetaTwitterTextServicer, and in particular its Parse method.

In Parse, we receive the request (which is a TweetRequest object), parse it using the third-party package, and respond with a parser_pb2.ParsedResponse object (structure defined in the proto file).

In serve(), we create our server, bind it to a port and start it. Simple. 🙂

To start the server, simply run the server file with python.

Write the clients

from grpc.beta import implementations

import parser_pb2

# Timeout for the RPC call, in seconds (10 is an arbitrary choice)
_TIMEOUT_SECONDS = 10

text = ("@burnettedmond, you now support #IvoWertzel's tweet "
        "parser!")


def run():
    channel = implementations.insecure_channel('localhost', 50051)
    stub = parser_pb2.beta_create_TwitterText_stub(channel)
    response = stub.Parse(parser_pb2.TweetRequest(text=text), _TIMEOUT_SECONDS)
    print 'Parser client received: %s' % response
    print 'response.users=%s' % response.users
    print 'response.tags=%s' % response.tags
    print 'response.urls=%s' % response.urls


if __name__ == '__main__':
    run()

The generated code also contains a helpful method for creating a client stub. We bind that to the same port as the server, and call our Parse method. Notice how we build the request object (parser_pb2.TweetRequest(text=text)) – it must be the same as defined in the proto file.

You can run this client using python and see this output:



this_dir = File.expand_path(File.dirname(__FILE__))
lib_dir = File.join(this_dir, 'lib')
$LOAD_PATH.unshift(lib_dir) unless $LOAD_PATH.include?(lib_dir)

require 'grpc'
require 'parser_services'

def main
  stub = Twittertext::TwitterText::Stub.new('localhost:50051', :this_channel_is_insecure)
  response = stub.parse(Twittertext::TweetRequest.new(
    text: "@burnettedmond, you now support #IvoWertzel's tweet parser!"))
  puts "#{response.inspect}"
  puts "response.users=#{response.users}"
  puts "response.tags=#{response.tags}"
  puts "response.urls=#{response.urls}"
end

main

Similarly, we build the client for Ruby, construct the Twittertext::TwitterText::Stub, pass in a Twittertext::TweetRequest, and receive a Twittertext::ParsedResponse back.

To run this client, use ruby. You should expect the following output:

<Twittertext::ParsedResponse: users: ["burnettedmond"], tags: ["IvoWertzel"], urls: [""]>


Again, the full code is at

You can keep building clients the same way for 10+ languages. Write once, use everywhere (almost). We haven’t even touched the sweet parts of gRPC, especially streaming, but if you look at this guide, they cover it well. I myself am just beginning to explore gRPC, but so far it seems promising. I can’t wait to see what you make with it.


The Reality of a Developer’s Life

I can’t say I am too proud of this post since it is mostly gifs that I found around the web and wanted to share with everyone.

In any case, I hope you will laugh a bit and if you come across any other related gifs, please let me know!

When you upload something to the production environment:

When you find a problem solution without searching in Google:

When you close your IDE without saving the code:

When you try to fix a bug at 3 AM:

When your regular expression returns what you expect:

When my boss told me that the module I had been working on will never be used:

When I show my boss that I have fixed a bug:

When I upload code without tests and it works as expected:

When marketing folks show developers what they have sold:

The first time you apply CSS to a web page:

When the sysadmin gives you root access:

When you run your script the first time after several hours working on it:

When you go away for the weekend and everyone else is at the office trying to fix all the issues:

When your boss finds someone to fix a critical bug:

When you receive extra pay because the project ended before the deadline:

When something that worked on Friday doesn’t work on Monday:

When you develop without specifications:

When the boss tells me that ‘tests are for those who don’t know how to code’:

When I show the boss that I have finally fixed this bug:


When my project manager enters the office


#BigData innovation through #CloudComputing:


With the digitalization of almost everything in this world, the amount of data is increasing at an exponential rate. IT experts soon realized that analyzing this data is not possible with traditional data-analysis tools. Considering this ever-expanding volume of useful data, they came up with many solutions, among which two initiatives stand out: big data and cloud computing.

Big data analysis offers the promise of valuable insights into data that can create competitive advantage, spark new innovations, and drive increased revenue. By carefully analyzing the data we can predict many things about a company. Cloud computing acts as a delivery model for a company’s IT services and has the potential to enhance business agility and productivity while enabling greater efficiency and significantly reducing costs. By storing data on cloud servers instead of in an on-site IT department, you not only save money but also keep your data safe and secure, as the security of these cloud servers is usually in the hands of top IT security companies.

Both technologies continue to thrive. Organizations are now moving beyond questions of what and how to store big data to addressing how to derive meaningful analytics that responds to real business needs. As cloud computing continues to mature, a growing number of enterprises are building efficient cloud environments, and cloud providers continue to expand services and service offerings.

Characteristics and Categories:

Databases for big data:

One of the most crucial tasks any company faces is choosing the correct database for its big data. As data volumes grow, more and more products have emerged to store it. Databases designed to handle big data are usually referred to as NoSQL systems; in contrast to traditional database systems, they do not depend on SQL. The working principle is the same across them: provide efficient and effective storage, and give companies ways to extract useful information from their big data. These systems truly help companies build and expand their business by giving them useful data analytics. Well-known examples include Cassandra and Amazon’s DynamoDB on AWS. They not only give you good data storage options; they also keep your data safe and secure and provide useful analytics about it.

Machine Learning in the Cloud:

One of the most interesting features of cloud computing and big data analysis is machine learning and its integration with AI. Machine-learning cloud services make it easier to build sophisticated, large-scale models that can increase efficiency and enhance the overall management of your company’s data. By injecting AI into your business, you can learn truly amazing things from your data analytics.

IoT platforms:

The Internet of Things (IoT) is also an interesting aspect of big data and cloud computing. Big data and IoT are essentially two sides of the same coin. Big data is more about the data itself, whereas IoT is more concerned with the flow of that data and the connectivity of the devices that generate it. IoT has created a flux of big data that must be analyzed in order to yield useful analytics.

Computation Engines:

Big data is not just about collecting and storing a large amount of data; the data is of no use until it gives us information and analytics. Computation engines use parallel and distributed algorithms to analyze the data, and they provide the scalability that makes storing and processing it efficient. MapReduce is one of the best-known computation models in the market at the moment.
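MapReduce’s parallel pattern can be sketched in a few lines of Python: a map step emits key/value pairs, a shuffle groups them by key, and a reduce step aggregates each group. This toy word count is illustrative only; real engines distribute these phases across many machines:

```python
from collections import defaultdict

def map_phase(document):
    # Emit a (word, 1) pair for every word, as a mapper would.
    return [(word, 1) for word in document.split()]

def reduce_phase(grouped):
    # Aggregate the counts for each key, as a reducer would.
    return {word: sum(counts) for word, counts in grouped.items()}

documents = ["big data big insights", "big cloud"]

# Shuffle: group all mapper outputs by key.
grouped = defaultdict(list)
for doc in documents:
    for word, count in map_phase(doc):
        grouped[word].append(count)

print(reduce_phase(grouped))
```

Because each mapper sees only one document and each reducer only one key’s values, both phases can run in parallel on separate machines, which is exactly where the scalability comes from.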

Big Data on AWS:

Amazon Web Services (AWS) provides one of the most complete big data platforms in the world, with a wide variety of options and services to meet your big data needs at low cost. It can process and analyze any type of data regardless of volume, velocity, and variety. AWS offers more than 50 services, and hundreds of features are added to them every year, constantly increasing the efficacy of the system. Two of the most famous services offered by AWS are Redshift and Kinesis.

AWS Redshift:

Amazon Redshift is a fast, efficient, fully managed data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing Business Intelligence (BI) tools. It can run complex analytic queries against petabytes of structured data, and thanks to sophisticated query optimization and high-performance local disks, most results come back in seconds. It is also extremely cost-efficient: you can start for as little as $0.25 per hour with no commitments and then scale up to petabytes of data at $1,000 per terabyte per year.

The service also includes Redshift Spectrum, which allows you to run SQL queries directly against exabytes of unstructured big data in Amazon S3. You don’t need to load or transform the data, and you can use open data formats including CSV, TSV, Parquet, Sequence, and RCFile. Best of all, Redshift Spectrum automatically scales query compute capacity based on the data being retrieved, so queries against Amazon S3 run fast regardless of data set size.

AWS Kinesis:

Amazon Kinesis Analytics is another great service by Amazon and is one of the easiest ways to process streaming data in real time with standard SQL. The best thing about this service is that you don’t have to learn any new programming languages or processing frameworks. This service allows you to query streaming data or build entire streaming applications using SQL. This makes sure that you can gain actionable insights and respond to your business and more importantly customer needs promptly.

Amazon Kinesis Analytics is a complete service that takes care of everything required to run your queries continuously and the best part is that it scales automatically to match the volume and throughput rate of your incoming data. With Amazon Kinesis Analytics, you only pay for the resources your queries consume which makes it extremely budget friendly and cost efficient. There is no minimum fee or setup cost.


Scaling #Python on Heroku: Deployment, part 1

It’s always good for a developer to have a couple of different deployment options under their belt. Why not try deploying your site to Heroku, as well as PythonAnywhere?

Heroku is also free for small applications that don’t have too many visitors, but it’s a bit more tricky to get deployed.

We will be following this tutorial, but we’ve pasted it here so it’s easier for you.

The requirements.txt file

If you didn’t create one before, we need to create a requirements.txt file to tell Heroku what Python packages need to be installed on our server.

But first, Heroku needs us to install a few new packages. Go to your console with virtualenv activated and type this:

(myvenv) $ pip install dj-database-url gunicorn whitenoise

After the installation is finished, go to the apilama directory and run this command:

(myvenv) $ pip freeze > requirements.txt

This will create a file called requirements.txt with a list of your installed packages (i.e. Python libraries that you are using, for example Django :)).

Note: pip freeze outputs a list of all the Python libraries installed in your virtualenv, and the > takes the output of pip freeze and puts it into a file. Try running pip freeze without the > requirements.txt to see what happens!

Open this file and add the following line at the bottom:

psycopg2

This line is needed for your application to work on Heroku.


Procfile

Another thing Heroku needs is a Procfile. This tells Heroku which commands to run in order to start our website. Open up your code editor, create a file called Procfile in the apilama directory, and add this line:

web: gunicorn mysite.wsgi

This line means that we’re going to be deploying a web application, and we’ll do that by running the command gunicorn mysite.wsgi (gunicorn is a program that’s like a more powerful version of Django’s runserver command).

Then save it. Done!

The runtime.txt file

We also need to tell Heroku which Python version we want to use. This is done by creating a runtime.txt file in the apilama directory using your editor’s “new file” command, and putting the following text (and nothing else!) inside:



The mysite/local_settings.py file

Because Heroku is more restrictive than PythonAnywhere, it wants to use different settings from the ones we use locally (on our computer). For example, Heroku wants to use Postgres while we use SQLite. That’s why we need to create a separate file for settings that will only be available in our local environment.

Go ahead and create the mysite/local_settings.py file. It should contain your DATABASES setup from your mysite/settings.py file. Just like that:

import os
BASE_DIR = os.path.dirname(os.path.dirname(__file__))

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
    }
}

DEBUG = True

Then just save it! 🙂


The mysite/settings.py file

Another thing we need to do is modify our website’s settings file. Open mysite/settings.py in your editor and add the following lines at the end of the file:

import dj_database_url
DATABASES['default'] = dj_database_url.config()

ALLOWED_HOSTS = ['*']

STATIC_ROOT = 'staticfiles'

DEBUG = False

try:
    from .local_settings import *
except ImportError:
    pass

It’ll do the necessary configuration for Heroku, and it’ll also import all of your local settings if mysite/local_settings.py exists.

Then save the file.


The mysite/wsgi.py file

Open the mysite/wsgi.py file and add these lines at the end:

from whitenoise.django import DjangoWhiteNoise
application = DjangoWhiteNoise(application)

All right!

Heroku account

You need to install the Heroku toolbelt, which you can find here (you can skip the installation if you’ve already installed it during setup):

When running the Heroku toolbelt installation program on Windows make sure to choose “Custom Installation” when being asked which components to install. In the list of components that shows up after that please additionally check the checkbox in front of “Git and SSH”.

On Windows you also must run the following command to add Git and SSH to your command prompt’s PATH: setx PATH "%PATH%;C:\Program Files\Git\bin". Restart the command prompt program afterwards to enable the change.

After restarting your command prompt, don’t forget to go to your apilama folder again and activate your virtualenv! (Hint: Check the Django installation chapter)

Please also create a free Heroku account here:

Then authenticate your Heroku account on your computer by running this command:

$ heroku login

In case you don’t have an SSH key, this command will automatically create one. SSH keys are required to push code to Heroku.

Git commit

Heroku uses git for its deployments. Unlike PythonAnywhere, you can push to Heroku directly, without going via Github. But we need to tweak a couple of things first.

Open the file named .gitignore in your apilama directory and add local_settings.py to it. We want git to ignore local_settings so it stays on our local computer and doesn’t end up on Heroku.


And we commit our changes

$ git status
$ git add -A .
$ git commit -m "additional files and changes for Heroku"

Pick an application name

We’ll be making your blog available on the Web at [your blog's name].herokuapp.com, so we need to choose a name that nobody else has taken. This name doesn’t need to be related to the Django blog app or to mysite or anything we’ve created so far. The name can be anything you want, but Heroku is quite strict about which characters you can use: you’re only allowed simple lowercase letters (no capital letters or accents), numbers, and dashes (-).

Once you’ve thought of a name (maybe something with your name or nickname in it), run this command, replacing apilamablog with your own application name:

$ heroku create apilamablog

Note: Remember to replace apilamablog with the name of your application on Heroku.

If you can’t think of a name, you can instead run

$ heroku create

and Heroku will pick an unused name for you (probably something like enigmatic-cove-2527).

If you ever feel like changing the name of your Heroku application, you can do so at any time with this command (replace the-new-name with the new name you want to use):

$ heroku apps:rename the-new-name

Note: Remember that after you change your application’s name, you’ll need to visit [the-new-name] to see your site.

Deploy to Heroku!

That was a lot of configuration and installing, right? But you only need to do that once! Now you can deploy!

When you ran heroku create, it automatically added the Heroku remote for our app to our repository. Now we can do a simple git push to deploy our application:

$ git push heroku master

Note: This will probably produce a lot of output the first time you run it, as Heroku compiles and installs psycopg. You’ll know it succeeded if you see something like “deployed to Heroku” near the end of the output.

Visit your application

You’ve deployed your code to Heroku, and specified the process types in a Procfile (we chose a web process type earlier). We can now tell Heroku to start this web process.

To do that, run the following command:

$ heroku ps:scale web=1

This tells Heroku to run just one instance of our web process. Since our blog application is quite simple, we don’t need too much power and so it’s fine to run just one process. It’s possible to ask Heroku to run more processes (by the way, Heroku calls these processes “Dynos” so don’t be surprised if you see this term) but it will no longer be free.

We can now visit the app in our browser with heroku open.

$ heroku open

Note: you will see an error page! We’ll talk about that in a minute.

This will open a URL in your browser, and at the moment you will probably see an error page.

The error you saw is because when we deployed to Heroku, we created a new database, and it’s empty. We need to run the migrate and createsuperuser commands, just like we did on PythonAnywhere. This time, we run them via a special command on our own computer, heroku run:

$ heroku run python manage.py migrate

$ heroku run python manage.py createsuperuser

The command prompt will ask you to choose a username and a password again. These will be your login details on your live website’s admin page.

Refresh it in your browser, and there you go! You now know how to deploy to two different hosting platforms. Pick your favourite 🙂

Artificial Intelligence Offerings as an #API

Artificial intelligence, or “AI” for short, is the use of intelligent machines that react and work like the human mind. This area of computer science is mainly concerned with speech recognition, language processing, planning, learning, and problem-solving. On the surface, artificial intelligence may be linked to robotics, as it is mostly portrayed that way in sci-fi movies, but the concept is much broader than that. Artificial intelligence is now capable of much more than you might think: it can reason, just like a human would; it can correct itself (self-correction); and it can learn and adapt. Most programs are fixed in the duties they perform because their code binds them to do so; artificial intelligence differs from traditional methods in this department.

The use of artificial intelligence is very common, ranging from the top tech businesses to an average person just using a phone or laptop. The term originated in 1956, and today it holds greater meaning than ever, stretching from robotic process automation to actual robotics itself! Artificial intelligence has all the abilities a technical machine should, from speed to accuracy, all while being remarkably human-like. AI can identify patterns and process data more efficiently than a human would, making it essential for businesses that want to progress.

As we have concluded, AI is a broad term and is not limited to a concise definition. Artificial intelligence reaches into one’s daily life: Siri, a virtual assistant, can perform a wide range of tasks, from looking up recipes to booking a flight. This kind of assistant works through APIs to get the results you expect from it. API stands for Application Programming Interface, which acts as a channel between the user and the service provider. In the most basic terms, consider your virtual assistant Siri: on your command, it calls an API (application programming interface) to access a different service, such as ordering an Uber to your doorstep.

APIs and Applications of AI as an API:

As previously stated, API stands for Application Programming Interface. It provides a set of routines and tools for building software applications, and it specifies how different pieces of software can interact with one another. Cortana, an assistant made by Microsoft, lets you make reservations at a restaurant by calling an API; this example highlights the use of a simple API and AI together.
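To make the idea concrete, here is a minimal sketch of the contract an API defines. The restaurant-reservation endpoint and its request/response shapes are entirely hypothetical, invented for illustration; the point is that the client only needs to know the interface, not the server’s implementation:

```python
import json

def reservation_api(request: dict) -> dict:
    """Pretend server-side handler: validates a hypothetical reservation request."""
    if "party_size" not in request:
        return {"status": 400, "error": "party_size is required"}
    return {"status": 200, "confirmation": f"table for {request['party_size']}"}

# The client calls the interface without knowing how it is implemented.
response = reservation_api({"party_size": 4})
print(json.dumps(response))
```

An assistant like Cortana plays the role of the client here: it translates your spoken request into a structured API call and relays the structured response back to you.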

Many other such interactions can be observed. The Google Maps API permits developers to embed Google Maps in a web page using either a JavaScript or a Flash interface. Using Siri to locate a road for you ties this process together: Siri is a form of artificial intelligence acting through an application programming interface. Even Tesla’s self-driving cars use Google Maps as a basic platform for their self-driving capabilities.



Use of Artificial Intelligence as an API in Businesses:

Many firms have switched to superior artificial intelligence systems instead of their old, traditional information-technology-based operations.
Many tedious everyday tasks are now performed by AI-based systems, freeing up human resources who can better invest their time in projects that benefit the company. Many Customer Relationship Management systems now use machine learning algorithms to discover how to communicate better with customers: on calling, the customer is immediately connected to an AI-based operator that deals with the customer’s concerns more efficiently than a human operator would.

ChatBots use artificial intelligence to engage the customer in two-way conversation: such pop-ups ask customers what they are concerned with and display only the information relevant to them. ChatBots ensure better two-way communication and help promote consumer loyalty. Companies rely on artificial intelligence to handle such matters more efficiently and professionally than their human counterparts.


The Dawn of Artificial Intelligence; from IBM to AWS:

International Business Machines (IBM): IBM is a company that previously focused on hardware but now also deals in software, including cognitive computing, a field very similar to artificial intelligence. Its research dates back to the 1950s; IBM provides server hardware, storage, software, cloud, and many cognitive offerings.

IBM Watson is IBM’s flagship offering for AI and Big Data, with tons of applications. It was introduced several years ago, and since then it has become one of the most powerful enterprise APIs out there.

Amazon Web Services Artificial Intelligence (AWS AI): Amazon Web Services provides you with instances to optimize your applications; using the provided instance, you can upgrade or enhance performance. AWS improves the performance of your workloads and offers a variety of services targeted at enterprise AI usage. Amazon also maintains a blog on AI, which is worth a mention.

Intrigued? Create your own Artificial Intelligence based Program with APIs:

Artificial intelligence surrounds us, from Tesla’s self-driving cars to Siri on your iPhone; it comes into play even when we operate a system for the smallest amount of output. Cortana, Siri, Tesla, Cogito, and our favorite platform for movies and series, Netflix, are all examples of artificial intelligence, and it’s easy to be influenced by them.

In this technological era, nothing seems impossible: you can create your own form of artificial intelligence by using an Application Programming Interface to build your own custom software.

Using a service that transforms speech into text, you can naturally process language with an artificial intelligence system that will cater to your every need.

Step 1: Log in to the provider’s site and allow the program to access the basic data of your account. Accept their terms and conditions and begin creating your own artificial-intelligence-based virtual assistant.

Step 2: Authorize access to basic information, then customize your AI assistant by adding some standard information: its name, description (what you intend your agent to be), language (the language your agent will operate in), and time zone.

Step 3: The test console allows you to try out the basic operations your agent performs. It lets you enter queries and see how your agent responds to them. Adding small talk is up to your preference, and you can do so by clicking the enable button.

Step 4: Save the changes you have made and find your artificial-intelligence-based assistant’s API keys. Feel free to make additional changes if you please, then use JavaScript to connect to the API.

Step 5: Use HTML5 speech recognition to get on the right track, communicate with the service, and host your web interface. Last but not least, say “hello” to your artificial-intelligence-powered, state-of-the-art virtual assistant!
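The key-based authentication from Step 4 can be sketched in a few lines. The endpoint URL and the key below are placeholders, not the real service’s values; this only shows how a query request carrying an API key is constructed (without actually sending it over the network):

```python
from urllib.request import Request

# Hypothetical values: the real endpoint and key come from the
# provider's console, as described in Step 4 above.
API_KEY = "your-api-key-here"
ENDPOINT = "https://api.example.com/v1/query"

def build_query(text: str) -> Request:
    """Construct (but do not send) an authenticated query request."""
    req = Request(ENDPOINT, data=text.encode("utf-8"), method="POST")
    req.add_header("Authorization", f"Bearer {API_KEY}")
    req.add_header("Content-Type", "text/plain; charset=utf-8")
    return req

req = build_query("hello")
print(req.get_header("Authorization"))
```

In a real assistant, the browser’s speech-recognition result from Step 5 would be passed as `text`, and the request would be sent with `urllib.request.urlopen(req)` or an equivalent HTTP client.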

Making HTTP Requests in JavaScript

I really liked this detailed guide to consuming APIs through JavaScript. No matter whether you use ReactJS, Angular, or any other frontend framework, this is basic knowledge that you have to deal with at some point. Rahul leads by example here. A must-read, in my opinion.

The Introduction:

As you probably know very well by now, the internet is made up of a bunch of interconnected computers called servers. When you surf the web and navigate between web pages, your browser requests information from one of these servers. The chart below illustrates the request.


That is, your browser sends a request, waits for the server to respond, and (once the server responds) processes the response. All of this is governed by protocols, or rules, which are a topic for another day.

Application Program Interface (API) 

Now the Wikipedia definition of an API will tell you that an Application Programming Interface (API) is a set of subroutine definitions, protocols, and tools for building application software. But in layman’s terms, in the context of the web, APIs generally allow you to send commands to programs running on the servers that…

View original post 640 more words

APIs for Authentication: A journey

Application Program Interface (API) key authentication is a technique that overcomes the hurdles of shared credentials by using a unique key for each user. The key usually takes the form of a long series of letters and numbers, distinct from the account’s login password. The owner provides the client with the key, which lets the client access a website. When a client presents the API key, the server allows the client to access data. The server can limit administrative functions for any client, for example changing passwords or deleting accounts. API keys are sometimes used so that account passwords do not have to be given over and over. APIs thus offer the flexibility to limit control while also protecting user passwords.

API keys work in a lot of different ways, as they were conceived by multiple companies that each authenticate differently. Some schemes, like Basic Auth, use an established standard with strict rules. Over time, though, a few familiar approaches have emerged: putting the key in the Authorization header alongside the username and password; simply appending the key to the URL; or burying the key in the request body together with the data. Wherever the key is added, the outcome is the same: the server grants access to the user.
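The three common key placements just described can be sketched as follows. The key and URL are made-up placeholders; which placement a given API expects is defined by that API’s documentation:

```python
from urllib.parse import urlencode

API_KEY = "demo-key-123"  # hypothetical key, for illustration only

# 1. In the Authorization header:
headers = {"Authorization": f"Bearer {API_KEY}"}

# 2. Appended to the URL as a query parameter:
url = "https://api.example.com/items?" + urlencode({"api_key": API_KEY})

# 3. Buried in the request body together with the data:
body = {"api_key": API_KEY, "data": {"item": 42}}

print(headers["Authorization"])
print(url)
```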

There are different security protocols in use, such as OAuth 1.0a, Basic API authentication with TLS, and OAuth 2.0. Basic Auth is the simplest because it only uses the standard framework or language library. Because it is the simplest, it also offers the least security and provides no advanced options: you are simply providing a username and password that are Base64-encoded.
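Building a Basic Auth header really is just a couple of lines; note that Base64 is an encoding, not encryption, which is why TLS is essential on top of it:

```python
import base64

def basic_auth_header(username: str, password: str) -> str:
    """Join username and password with a colon and Base64-encode the pair."""
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

print(basic_auth_header("alice", "secret"))  # → Basic YWxpY2U6c2VjcmV0
```

Anyone intercepting this header can trivially decode it back to the plain credentials, which is exactly why Basic Auth only makes sense over TLS.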

OAuth 1.0a, on the other hand, is the most secure of these protocols, as it uses a cryptographic signature combined with a token secret, a nonce, and other request-based information. Since the token secret is never passed directly across the wire, there is no possibility of anyone seeing a password in transit, which gives OAuth 1.0a an edge. On the other hand, this level of security comes with a lot of complexity: you have to apply hashing algorithms in strict steps, though by now libraries in nearly every programming language can do this for you.
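A simplified sketch of the signing step gives a feel for the mechanism. Real OAuth 1.0a also requires percent-encoding and sorting the request parameters into a “signature base string”; the base string and secrets below are invented placeholders, and only the HMAC-SHA1 signing itself is shown:

```python
import base64
import hashlib
import hmac

def sign(base_string: str, consumer_secret: str, token_secret: str) -> str:
    """HMAC-SHA1 sign a signature base string, as OAuth 1.0a prescribes."""
    key = f"{consumer_secret}&{token_secret}".encode("utf-8")
    digest = hmac.new(key, base_string.encode("utf-8"), hashlib.sha1).digest()
    return base64.b64encode(digest).decode("ascii")

signature = sign("GET&https%3A%2F%2Fapi.example.com%2F&oauth_nonce%3Dabc",
                 "consumer-secret", "token-secret")
print(len(signature))  # SHA-1 digest is 20 bytes → 28 Base64 characters
```

The signature, not the secrets, travels with the request; the server recomputes it from its own copy of the secrets and compares.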

Repose is another API authentication platform; it provides open-source API validation, HTTP request logging, rate limiting, and much more. It employs a RESTful middleware platform that is easily scalable and extensible. OAuth 2.0 and Auth0 are both open-source API authenticators, and both take a completely different approach from OAuth 1.0a: encryption is handled by TLS (previously called SSL) rather than by cryptographic algorithms. There are not that many OAuth 2.0 libraries, which is a disadvantage for users, but OAuth 2.0 is used by big names like Google and Twitter.

Auth0 is a platform for authenticating apps that supports just about all identity providers on any device or cloud. It uses a secure HTTPS API to integrate with other tools, giving it a seamless experience. It gives clients the ability to authenticate with credentials they are comfortable with.

Many API management platforms are available, each bringing something unique to the table. Kong is an API manager that offers a range of plugins for improved security, better authentication services, and management of inbound and outbound traffic. Kong acts as a gateway between the client and the API, providing layers of rate limiting, logging, and authentication.

3Scale is another manager; it separates the traffic-control and management layers, producing superior, unsurpassed scalability. It integrates with many gateway deployments on Amazon, Heroku, and Red Hat OpenShift, which are free to use. Additionally, plugins can be added via libraries built in several different languages, and 3Scale designs custom API management protocols for organizations as well. Microsoft Azure also provides a host of options so that little effort is required on the client’s part and most of the work is done by the manager. Azure offers a polished front end and developer portal that make it more user-friendly. It offers the greatest number of options for APIs and thus attracts more clients.

Dell Boomi can be thought of as cloud middleware: plumbing between applications that reside in the cloud or on premises. It can efficiently manage data for social networks and other uses, and it communicates with data across different or common domains, giving it an added advantage. MuleSoft is another API manager; it makes use of the Anypoint platform to re-architect SOA infrastructure covering legacy systems, proprietary platforms, and custom integrations. This results in a strong, agile business solution for its clients.

AWS Cognito is another management system offered by Amazon Web Services. It has an adaptive multi-layer design with products that ensure availability and resilience, and it is built with security as its key feature. It can be deployed on any platform, using the Lock library or a custom-built implementation chosen from more than 50 integrations. It enables clients to authorize users through an external identity provider that assigns temporary security credentials for users to access your website or app. The external identity providers it employs support OpenID and SAML, along with the option to integrate your own identity provider.

Recently, APIs have found applications in health-related fields. A vast majority of healthcare providers and other companies in the healthcare industry make use of web and mobile services. They provide vital information to patients and help them share information with other prescribers. Medical APIs will also help with integration between partner providers, patient-support services, insurance companies, and government agencies. But whether these APIs are HIPAA compliant is a question many users have. Yes, there are many providers that meet the challenge of conforming to client demands while also ensuring the security of medical data.

Apigee Edge is another platform; it enhances the digital value chain from the back end for customers who engage with an app. It is HIPAA (Health Insurance Portability and Accountability Act) and PCI compliant. Apigee maintains compliance through a number of features, including encrypting and masking information, protecting traffic, and managing and securing all data.

For healthcare providers, there are other API managers that provide HIPAA compliance, like TrueVault. TrueVault acts as an interface between internal data and external applications. For instance, if a diagnostic laboratory wants to provide online viewing of test results, it can use TrueVault to let approved third parties access that information without custom APIs or hooks. Hence, it provides a secure service that not only saves time but also delivers information to patients via mobile and tablet interfaces.

Still, API managers face many challenges in building optimized solutions for the healthcare sector. Lack of access to effective tools for testing and monitoring these interfaces is a serious obstacle for developers. Furthermore, developers lack insight and feedback on medical APIs, which is a critical factor in developing elaborate, engaging APIs that will be widely adopted by the medical field.

Related Links:

  1. Apigee management compliance.

  2. MuleSoft API manager

  3. TrueVault Systems

  4. Microsoft Azure

  5. Dell Boomi

  6. Kong API manager

  7. 3Scale management

  8. Akana API management solutions

  9. Auth0

  10. Repose API manager

  11. OAuth2.0

  12. OAuth1.0a