Sumedh Meshram

A Personal Blog

Artificial Intelligence in Software Development and Testing

According to Gartner, artificial intelligence will be omnipresent across all spheres of technology and will rank among the top investment priorities of CIOs by 2020. Going by the market research firm's figures, the scope for artificial intelligence in North America in 2019 is approximately $6.36 billion.

Technology giants like Amazon, Facebook, Google, and many others spend huge sums of money on bringing AI into their software.

AI emerged as an enterprise technology and has changed the outlook of everything, including software development and software testing. It is, therefore, important that we take a minute to look into the role of artificial intelligence in software development and testing.

Higher Level of Precision

It is natural for humans to make errors. Even highly skilled testers sometimes end up making mistakes while performing manual testing. With automated testing, the same steps can be executed with precision every time the tests run, and specific outcomes are never missed. Freed from ongoing manual examination, testers have a more significant proportion of their time to develop new automated software tests and work on more sophisticated features.

Artificial intelligence can help to overcome the drawbacks of manual testing. It is practically unsustainable for even a leading software or quality assurance (QA) team to perform a well-managed web app test with thousands of users. With the help of automated testing, the user can simulate tens, hundreds, or thousands of virtual users who interact with a network, software, or web-based app.

Massive Support for Developers as Well as Testers

Developers can utilize shared automated tests to catch errors instantly before sending code for quality assurance. These tests can run automatically whenever source code alterations are checked in, and the team or the app builder can be notified accordingly if a test result turns out to be unsuccessful. Capabilities like these save developers time and boost their confidence.

Leveraging the Whole Test Scope

In software testing, artificial intelligence lets the user increase the overall coverage and depth of tests, thereby leading to a massive enhancement in software quality. Artificial intelligence-driven software testing can look into memory and file contents, internal program states, and data tables to ascertain whether the software is behaving as it should. On the whole, test automation can perform more than a thousand different test cases in each test run, offering coverage that would never have been possible through manual testing.

Less Time-Consuming and Helps in Quick Marketing

Software tests have to be repeated every time the source code is altered, and repetitive manual tests can prove to be time-consuming and tremendously expensive.

On the other hand, once developed, automated tests backed by machine learning can be run continuously without the need to incur any extra expenses.

The total time taken for software testing can be reduced from two or three days to a few hours, which indirectly helps to save money.

To Wrap Up

Integrating artificial intelligence (AI) with software testing and software development can help to build an environment where software can be swiftly examined, diagnosed, and modified.

Artificial intelligence testing will permit high-quality engineering and will decrease the total time taken for testing and development. As a result, it will help to save time, money, and resources, while allowing testers to focus on prime activities such as launching quality software.

 

Build A Serverless Function

In this tutorial, you’ll build and publish a serverless function that generates QR codes, using Cloudflare Workers.

Demo

This tutorial makes use of Wrangler, our command-line tool for generating, building, and publishing projects on the Cloudflare Workers platform. If you haven’t used Wrangler, we recommend checking out the “Installing the CLI” part of our Quick Start guide, which will get you set up with Wrangler, and familiar with the basic commands.

If you’re interested in building and publishing serverless functions, this is the guide for you! No prior experience with serverless functions or Cloudflare Workers is assumed.

One more thing before you start the tutorial: if you just want to jump straight to the code, we’ve made the final version of the codebase available on GitHub. You can take that code, customize it, and deploy it for use in your own projects. Happy coding!

Prerequisites

To publish your QR Code Generator function to Cloudflare Workers, you’ll need a few things:

  • A Cloudflare account, and access to the API keys for that account
  • A Wrangler installation running locally on your machine, and access to the command-line

If you don’t have those things quite yet, don’t worry. We’ll walk through each of them and make sure we’re ready to go, before you start creating your application.

You’ll need to get your Cloudflare API keys to deploy code to Cloudflare Workers: see “Finding your Cloudflare API keys” for a brief guide on how to find them.

Generate

Cloudflare’s command-line tool for managing Worker projects, Wrangler, has great support for templates – pre-built collections of code that make it easy to get started writing Workers. We’ll make use of the default JavaScript template to start building your project.

In the command line, generate your Worker project, using Wrangler’s worker-template, and pass the project name “qr-code-generator”:

wrangler generate qr-code-generator
cd qr-code-generator

Wrangler templates are just Git repositories, so if you want to create your own templates, or use one from our Template Gallery, there’s a ton of options to help you get started.

Cloudflare’s worker-template includes support for building and deploying JavaScript-based projects. Inside of your new qr-code-generator directory, index.js represents the entry-point to your Cloudflare Workers application.

All Cloudflare Workers applications start by listening for fetch events, which are fired when a client makes a request to a Workers route. When that request occurs, you can construct responses and return them to the user. This tutorial will walk you through understanding how the request/response pattern works, and how we can use it to build fully-featured applications.

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

/**
 * Fetch and log a request
 * @param {Request} request
 */
async function handleRequest(request) {
  return new Response('Hello worker!', { status: 200 })
}

In your default index.js file, we can see that request/response pattern in action. The handleRequest function constructs a new Response with the body text “Hello worker!”, as well as an explicit status code of 200.

When a fetch event comes into the worker, the script uses event.respondWith to return that new response back to the client. This means that your Cloudflare Worker script will serve new responses directly from Cloudflare’s cloud network: instead of continuing to the origin, where a standard server would accept requests, and return responses, Cloudflare Workers allows you to respond quickly and efficiently by constructing responses directly on the edge.

Build

Any project you publish to Cloudflare Workers can make use of modern JS tooling like ES modules, NPM packages, and async/await functions to put together your application. In addition, simple serverless functions aren’t the only thing you can publish on Cloudflare Workers: you can build full applications using the same tooling and process as what we’ll be building today.

The QR code generator we’ll build in this tutorial will be a serverless function that runs at a single route and receives requests. Given text sent inside of that request (such as URLs, or strings), the function will encode the text into a QR code, and serve the QR code as a PNG response.

Handling requests

Currently, our Workers function receives requests, and returns a simple response with the text “Hello worker!”. To handle data coming in to our serverless function, check if the incoming request is a POST:

async function handleRequest(request) {
  if (request.method === 'POST') {
    return new Response('Hello worker!', { status: 200 })
  }
}

Currently, if an incoming request isn’t a POST, response will be undefined. Since we only care about incoming POST requests, populate response with a new Response with a 500 status code, if the incoming request isn’t a POST:

async function handleRequest(request) {
  let response
  if (request.method === 'POST') {
    response = new Response('Hello worker!', { status: 200 })
  } else {
    response = new Response('Expected POST', { status: 500 })
  }
  return response
}

With the basic flow of handleRequest established, it’s time to think about how to handle incoming valid requests: if a POST request comes in, the function should generate a QR code. To start, move the “Hello worker!” response into a new function, generate, which will ultimately contain the bulk of our function’s logic:

const generate = async request => {
  return new Response('Hello worker!', { status: 200 })
}

async function handleRequest(request) {
  // ...
  if (request.method === 'POST') {
    response = await generate(request)
  // ...
}

Building a QR Code

All projects deployed to Cloudflare Workers support NPM packages, which makes it incredibly easy to rapidly build out a lot of functionality in your serverless functions. The qr-image package is a great way to take text, and encode it into a QR code, with support for generating the codes in a number of file formats (such as PNG, the default, and SVG), and configuring other aspects of the generated QR code. In the command-line, install and save qr-image to your project’s package.json:

npm install --save qr-image

In index.js, require the qr-image package as the variable qr. In the generate function, parse the incoming request as JSON using request.json(), and use the text to generate a QR code using qr.imageSync:

const qr = require('qr-image')

const generate = async request => {
  const body = await request.json()
  const text = body.text
  const qr_png = qr.imageSync(text || 'https://workers.dev')
}

By default, the QR code is generated as a PNG. Construct a new instance of Response, passing in the PNG data as the body, and a Content-Type header of image/png: this will allow browsers to properly parse the data coming back from your serverless function, as an image:

const generate = async request => {
  // ...
  const headers = { 'Content-Type': 'image/png' }
  return new Response(qr_png, { headers })
}

With the generate function filled out, we can simply wait for the generation to finish in handleRequest, and return it to the client as response:

async function handleRequest(request) {
  // ...
  if (request.method === 'POST') {
    response = await generate(request)
  // ...
}

Testing In a UI

The serverless function will work if a user sends a POST request to a route, but it would be great to also be able to test it with a proper interface. At the moment, if any request is received by your function that isn’t a POST, a 500 response is returned. The new version of handleRequest should return a new Response with a static HTML body, instead of the 500 error:

const landing = `
<h1>QR Generator</h1>
<p>Click the below button to generate a new QR code. This will make a request to your serverless function.</p>
<input type="text" id="text" value="https://workers.dev"></input>
<button onclick='generate()'>Generate QR Code</button>
<p>Check the "Network" tab in your browser's developer tools to see the generated QR code.</p>
<script>
  function generate() {
    fetch(window.location.pathname, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ text: document.querySelector("#text").value })
    })
  }
</script>
`

async function handleRequest(request) {
  let response
  if (request.method === 'POST') {
    response = await generate(request)
  } else {
    response = new Response(landing, { headers: { 'Content-Type': 'text/html' } })
  }
  return response
}

The landing variable, which is a static HTML string, sets up an input tag and a corresponding button, which calls the generate function. This function will make an HTTP POST request back to your serverless function, allowing you to see the corresponding QR code image data inside of your browser’s network inspector.

With that, your serverless function is complete! The full version of the code looks like this:

const qr = require('qr-image')

const generate = async request => {
  const { text } = await request.json()
  const headers = { 'Content-Type': 'image/png' }
  const qr_png = qr.imageSync(text || 'https://workers.dev')
  return new Response(qr_png, { headers })
}

const landing = `
<h1>QR Generator</h1>
<p>Click the below button to generate a new QR code. This will make a request to your serverless function.</p>
<input type="text" id="text" value="https://workers.dev"></input>
<button onclick='generate()'>Generate QR Code</button>
<p>Check the "Network" tab in your browser's developer tools to see the generated QR code.</p>
<script>
  function generate() {
    fetch(window.location.pathname, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ text: document.querySelector("#text").value })
    })
  }
</script>
`

async function handleRequest(request) {
  let response
  if (request.method === 'POST') {
    response = await generate(request)
  } else {
    response = new Response(landing, { headers: { 'Content-Type': 'text/html' } })
  }
  return response
}

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

Publish

And with that, you’re finished writing the code for the QR code serverless function, on Cloudflare Workers!

Wrangler has built-in support for bundling, uploading, and releasing your Cloudflare Workers application. To do this, we’ll run wrangler publish, which will build and publish your code:

wrangler publish

Resources

In this tutorial, you built and published a serverless function to Cloudflare Workers for generating QR codes. If you’d like to see the full source code for this application, you can find it on GitHub.

If you enjoyed this tutorial, we encourage you to explore our other tutorials for building on Cloudflare Workers.

If you want to get started building your own projects, check out the quick-start templates we’ve provided in our Template Gallery.

DevOps: The Journey So Far and What Lies Ahead

If you have been in the IT industry for over 10 years, I am sure you have seen the evolution and massive transformation DevOps brought in as organizations continue to shift from optimizing for cost to optimizing for speed, and that shift is accelerating the pace of DevOps adoption. Today, when I look at life before and after DevOps, I can easily see that some terminology in our daily work has changed primarily because of DevOps adoption:

a) Manual => Automated

b) Physical Datacenter => Virtual Private Cloud

c) Outages => High Availability/Zero downtime

d) Enterprise/Web archives => Containers

and the list goes on and on…

DevOps, which started as a buzzword, is now becoming a standard for every organization in order to meet the demands of time to market and release better products to stay ahead of the competition, and that’s the reason big names like Google, Netflix, Amazon, and Facebook are heavily investing in it and have experienced the value coming out of it.

So, what exactly has DevOps changed?

No More Working in Silos

DevOps has improved the software development culture and mindset. Welcoming change, a blameless culture, transparency, accountability, embracing failure, and the right collaboration and communication between different teams are some of the keys organizations have used to successfully unlock a DevOps culture.

Time to Market

DevOps enables organizations to develop and deploy software faster and more efficiently enabled by an end-to-end automated and integrated process using CI/CD pipelines. Continuous Delivery allows developers to continuously roll out tested code that is always in a production-ready state and can be released to production based on business approval. As soon as a new feature or story is complete, the code is immediately available for deployment to test environment, UAT, or production.

DevOps-as-Code

Over the last few years, there has been tremendous change in how automation is done. Pipeline-as-Code to automate CI/CD pipelines, Config-as-Code to manage configuration and orchestration tasks, and Infrastructure-as-Code to automate environment provisioning are all gaining momentum. Languages like Groovy and Python, where core OOP concepts are required, are being used for most of the automation. Also, unit test cases are written for every piece of automation to validate the code, just as application developers test theirs. This has enabled and encouraged many application developers to understand DevOps workflows holistically, gain expertise, and contribute to what was earlier a mere black box for them.

Software Killed Hardware

Gone are the days when sysadmins were heavily involved in receiving new hardware and then setting up and configuring new servers, each with a custom configuration. Today the servers, network, firewalls, load balancers, and everything else are virtual, living somewhere at Amazon or Google or Microsoft. Today, you write software to provision, manage, and decommission infrastructure. Upgrading to a new server, adding identical servers, and securing infrastructure are all driven by software.

Containers and Microservices to Maximize Deployment Velocity

Microservices have given developers the freedom to make changes to one service, create a Docker image, and deploy it independently without impacting other services in the system. If there is an issue in any service, it can easily be isolated to that single service so that a fast rollback can be made. This speed of deployment with minimal risk is the primary reason why organizations like Netflix and Amazon have adopted microservices-based architectures, ensuring they eliminate as many bottlenecks as possible to release the application to end users. Platform-as-a-Service tools like Amazon ECS, Google Kubernetes, and Red Hat OpenShift have helped enterprises adopt microservices architecture and migrate their existing applications to microservices running in Docker containers in production.

Cloud Encourages the Birth of Many Startup Businesses

Cloud computing provides an added advantage to the start-up business. Businesses previously required heavy time and money for housing, powering, and cooling infrastructure. With the cloud, there are limited upfront capital costs as it provides the ability to match revenue with expenses since you pay only for the resources you use. For a festive season or other cases of peak traffic, you can easily scale up and down your infrastructure.

DevOps Predictions

With technology evolving at a rapid pace, DevOps will continue to gain momentum and break barriers. Here are some high-level predictions of what lies ahead in the DevOps world.

Cloud Migration

Cloud adoption will continue to evolve. New startup businesses are already adopting the cloud for hosting their applications. There has been a substantial increase in big enterprise giants migrating their physical data centers to the cloud, and this trend will continue to gain momentum as a matter of survival.

Continuous Deployment Is So Close, Yet So Far

While continuous integration and continuous delivery practices have enabled organizations to release new features to the market in the most efficient way, there are hardly any buyers who want to adopt continuous deployment. Product-based companies like Amazon and Netflix are making frequent strides with continuous deployment, but financial firms are still focused on having a robust application and infrastructure with performance and security being their primary areas, with the desire to release features to market using a manual trigger.

Serverless Computing

Renowned training company A Cloud Guru runs its application on a serverless architecture. There are no fixed infrastructure costs to pay, as they pay based on the number of visits to their course content. This allows them to offer their courses cheaply, which also gives them an added competitive advantage over their competitors. AWS Lambda, Google Cloud Functions, and Azure Functions are all examples of Function-as-a-Service platforms that support serverless architecture. The main concern many IT leaders share is fear of vendor lock-in. Choosing a cross-vendor programming language and adopting standardized services over the fully managed services provided out of the box by the cloud provider are two ways to ease that fear when adopting a serverless architecture.

DevSecOps 

Security is going to play a major role as more and more applications are migrated to the cloud. Whether it’s an application, a VM, a container, or an entire network, one needs to understand the entire process and lifecycle and ensure no corner is left vulnerable for an outsider to hack into your application or your system. This can be achieved only when you integrate all flavours of security testing into your DevOps process.

SRE for Service Management

In general, an SRE team’s responsibility is to ensure the service is available all the time, covering the application health monitoring and emergency response traditionally handled by the Ops team. However, this is changing, and organizations are looking for engineers who can code as well as take care of ops. For example, Google has put a cap of 50% on the overall ops work for all SREs; in the remaining 50% of their time, SREs are actually doing development. They found this model has many advantages, as SREs are directly modifying code, building and supporting the system, and bridging the gap with the product development teams during cross-training for new feature releases.

Cognitive DevOps

Cognitive DevOps focuses on developing automated systems capable of resolving problems and providing solutions without any human intervention. It uses machine learning algorithms to help deal with the real-time challenges faced in DevOps by gathering and analyzing data across different environments, which will eventually lead to smooth and error-free releases. IT operations analytics, network performance analytics, security analytics, application performance management, digital performance management, and algorithmic IT operations are some of the key areas vendors are targeting to implement cognitive operations in the journey from DevOps to NoOps.

I would like to summarize this blog by stating that technology is changing at a very rapid pace and DevOps is fueling the demand with its workflows, tools, and practices. With so much already achieved in the last decade and so much to achieve in years to come, the DevOps journey ahead will be exciting and full of surprises.

Which Country Has The Best Programming Language Programmer?

Programming is at the heart of every technological innovation. Therefore, a country with the best computer programmers can be considered technologically advanced in today's world. Comparing countries to determine which of them has the best computer programmers is slightly complicated, as different countries have different population sizes. Luckily, HackerRank makes it easy with its own set of metrics to measure the excellence of programmers in different countries. According to HackerRank, the following is the list of the top 10 countries with the best computer programmers.
 
1. China - The reason for China occupying the top position is not its population. The metrics for evaluating the best computer programmers are speed and accuracy. HackerRank holds special challenges on its website annually to determine the best programmers country by country. The challenges focus on coding skills, data structures and algorithmic concepts, mathematical and analytical skills, and functional programming. The participants from China have outshone those from all other countries.
2. Russia - It is said that Russia has the best hackers in the world, and the world has allegedly seen their hacking skills. To be a hacker, you need to be a programmer of the highest level. Russian programmers scored 99.9 where Chinese programmers scored full marks. However, they have come out better than China in algorithms.
 
3. Poland - This can come as a surprise to many, as Poland is not known as a country with many multi-national tech companies. However, if you know how good the education system is in Poland, you will not wonder why they have managed to rank so high. Computer programming is taught in the lower classes in schools. Therefore, by the time students leave high school, they have mastered computer programming languages like Java and Python. This is also reflected by the fact that Polish programmers have won the Java challenges on HackerRank ahead of all other countries.
 
4. Switzerland - Switzerland hosts the headquarters of multiple international tech companies. In fact, Swiss computer programmers are the most dominant on the scoreboard of HackerRank challenges. It is interesting to note that Pascal, one of the foremost computer programming languages, came from Switzerland. Besides, Switzerland is among the leading countries in the Global Innovation Index.
 
5. Hungary - The Hungarian Government has introduced programming classes in primary and secondary schools, and therefore students are groomed to be programmers from childhood. They have the best performance in tutorial challenges on HackerRank. It is somewhat surprising to many that, among various other technologically advanced European countries, Hungary is in the top 5. It is all about the education system and grooming from an early stage.
6. Japan - Japan is now known as the country of cryptocurrency. The revolutionary blockchain technology has taken root in Japan and is now ruling the world. In fact, according to HackerRank challenges, Japan is the leader in artificial intelligence. This only shows the intelligence and skill set of Japanese computer programmers. Japan has literally transformed in the last decade and is now labeled one of the leaders in innovation.
 
7. Taiwan - Taiwan and China go hand in hand, and Taiwan is considered to be one of the most advanced countries in technology. They are super fast in adapting to new programming languages, and according to a survey, Python is the most dominant language in the country. On HackerRank, computer programmers from Taiwan are among the leaders in algorithms, data structures, and functional programming challenges. Therefore, the programmers are all-rounders, and it is this all-around growth that is accelerating the country to new heights in the technological field.
 
8. France - The French Government made major changes in the education system to inspire students to become computer programmers. Just like Poland, France has offered programming classes in elementary schools since 2014, and the result is here to see. Their rank improves every year, and they are climbing faster than most countries on the HackerRank board.
 
9. Czech Republic - According to HackerRank, the Czech Republic has the most dominant computer programmers in shell scripting, as proven through several challenges. The programmers also rank second in the mathematical challenges, which reflects their skill in functional programming.
 
10. Italy - Italy is slowly but steadily becoming one of the emerging countries in computer programming. Big companies are investing heavily in Italy to bag the top programmers in the country. Apple announced a new school for nearly 1,000 programmers in Italy. Programmers from the country have performed exceedingly well on HackerRank in database and tutorial challenges. Some of you might be surprised to find that the US and India do not feature among the top 10 countries: India ranks 31st while the US ranks 13th in the HackerRank ranking based on challenges organized on the website.

Source: HOB

Cutting Edge - REST and Web API in ASP.NET Core

I’ve never been a fan of ASP.NET Web API as a standalone framework and I can hardly think of a project where I used it. Not that the framework in itself is out of place or unnecessary. I just find that the business value it actually delivers is, most of the time, minimal. On the other hand, I recognize in it some clear signs of the underlying effort Microsoft is making to renew the ASP.NET runtime pipeline. Overall, I like to think of ASP.NET Web API as a proof of concept for what today has become ASP.NET Core and, specifically, the new runtime environment of ASP.NET Core.

Web API was primarily introduced as a way to make building a RESTful API easy and comfortable in ASP.NET. This article is about how to achieve the same result—building a RESTful API—in ASP.NET Core.

The Extra Costs of Web API in Classic ASP.NET

ASP.NET Web API was built around the principles sustaining the Open Web Interface for .NET (OWIN) specification, which is meant to decouple the Web server from hosted Web applications. In the .NET space, the introduction of OWIN marked a turning point, where the tight integration of IIS and ASP.NET was questioned. That tight coupling was fully abandoned in ASP.NET Core.

Any Web façade built using the ASP.NET Web API framework relies on a completely rewritten pipeline that uses the standard OWIN interface to dialog with the underlying host Web server. Yet, an ASP.NET Web API is not a standalone application. To be available for callers it needs a host environment that takes care of listening to some configured port, captures incoming requests and dispatches them down the Web API pipeline.

A Web API application can be hosted in a Windows service or in a custom console application that implements the appropriate OWIN interfaces. It can also be hosted by a classic ASP.NET application, whether targeting Web Forms or ASP.NET MVC. Over the past few years, hosting Web API within a classic ASP.NET MVC application proved to be a very common scenario, yet one of the least effective in terms of raw performance and memory footprint.

As Figure 1 shows, whenever you arrange a Web API façade within an ASP.NET MVC application, three frameworks end up living side-by-side, processing every single Web API request. The host ASP.NET MVC application is encapsulated in an HTTP handler living on top of system.web—the original ASP.NET runtime environment. On top of that—taking up additional memory—you have the OWIN-based pipeline of Web API.

Figure 1 Frameworks Involved in a Classic ASP.NET Web API Application

The vision of introducing a server-independent Web framework is, in this case, significantly weakened by the constraints of staying compatible with the existing ASP.NET pipeline. Therefore, the clean and REST-friendly design of Web API doesn’t unleash its full potential because of the legacy system.web assembly. From a pure performance perspective, only some edge use cases really justify the use of Web API.

Effective Use Cases for Web API

Web API is the most high-profile example of the OWIN principles in action. A Web API library runs behind a server application that captures and forwards incoming requests. This host can be a classic Web application on the Microsoft stack (Web Forms, ASP.NET MVC) or it can be a console application or a Windows service.

In any case, it has to be an application endowed with a thin layer of code capable of dialoging with the Web API listener.

Hosting a Web API outside of the Web environment removes at the root any dependency on the system.web assembly, thus magically making the request pipeline as lean and mean as desired.

This is the crucial point that led the ASP.NET Core team to build the ASP.NET Core pipeline. The ideal hosting conditions for Web API have been reworked to be the ideal hosting conditions for just about any ASP.NET Core application. This enabled a completely new pipeline devoid of dependencies on the system.web assembly and hostable behind an embedded HTTP server exposing a contracted interface—the IServer interface.

The OWIN specification and Katana, the implementation of it for the IIS/ASP.NET environment, play no role in ASP.NET Core. But the experience with these platforms matured the technical vision (especially with Web API edge cases), which shines through the dazzling new pipeline of ASP.NET Core.

The funny thing is that once the entire ASP.NET pipeline was redesigned—deeply inspired by the ideal hosting environment for Web API—that same Web API as a separate framework ceased to be relevant. In the new ASP.NET Core pipeline there’s the need for just one application model—the MVC application model—based on controllers, and controller classes are a bit richer than in classic ASP.NET MVC, thus incorporating the functions of old ASP.NET controllers and Web API controllers.

Extended ASP.NET Core Controllers

In ASP.NET Core, you work with controller classes whether you intend to serve HTML or any other type of response, such as JSON or PDF. A bunch of new action result types have been added to make building RESTful interfaces easy and convenient. Content negotiation is fully supported for any controller classes, and formatting helpers have been baked into the action invoker infrastructure. If you want to build a Web API that exposes HTTP endpoints, all you do is build a plain controller class, as shown here:

 
public class ApiController : Controller
{
  // Your methods here
}

The name of the controller class is arbitrary. While having /api somewhere in the URL is desirable for clarity, it’s in no way required. You can have /api in the URL being invoked both if you use conventional routing (an ApiController class) to map URLs to action methods, or if you use attribute routing. In my personal opinion, attribute routing is probably preferable because it allows you to expose multiple endpoints with the same /api item in the URL, while being defined in distinct, arbitrarily named controller classes.
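To see what that looks like in practice, here’s a minimal sketch of an attribute-routed controller. The route template and controller name are arbitrary choices of mine, and FindResourceDataInSomeWay is the same kind of placeholder helper used in the other snippets in this article:

[Route("api/news")]
public class NewsApiController : Controller
{
  // GET api/news/{id}
  [HttpGet("{id}")]
  public ObjectResult Get(Guid id)
  {
    var data = FindResourceDataInSomeWay(id);
    return Ok(data);
  }
}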

The Controller class in ASP.NET Core has a lot more features than the class in classic ASP.NET MVC, and most of the extensions relate to building a RESTful Web API. First and foremost, all ASP.NET Core controllers support content negotiation. Content negotiation refers to a silent negotiation taking place between the caller and the API regarding the actual format of returned data.

Content negotiation doesn’t happen all the time and for just every request. It takes place only if the incoming request contains an Accept HTTP header that advertises the MIME types the caller is able to understand. In this case, the ASP.NET Core infrastructure goes through the types listed in the header content until it finds one for which a formatter exists in the current configuration of the application. If no matching formatter is found in the list of types, then the default JSON formatter is used, like so:

 
[HttpGet]
public ObjectResult Get(Guid id)
{
  // Do something here to retrieve the resource data
  var data = FindResourceDataInSomeWay(id);
  return Ok(data);
}

Another remarkable aspect of content negotiation is that while it won’t produce any change in the serialization process without an Accept HTTP header, it’s technically triggered only if the response being sent back by the controller is of type ObjectResult. The most common way to return an ObjectResult action result type is by serializing the response via the Ok method. It’s important to note that if you serialize the controller response via, say, the Json method, no negotiation will ever take place regardless of the headers sent. Support for output formatters can be added programmatically through the options of the AddMvc method. Here’s an example:

 
services.AddMvc(options =>
{
  options.OutputFormatters.Add(new PdfFormatter());
});

In this example, the demo class PdfFormatter contains internally the list of supported MIME types it can handle.
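To give a rough idea of what such a formatter might look like, here’s a minimal sketch. It’s only a sketch: the exact base-class members can vary slightly across ASP.NET Core versions, and RenderPdfInSomeWay is a hypothetical helper that produces the PDF bytes:

public class PdfFormatter : OutputFormatter
{
  public PdfFormatter()
  {
    // Advertise the MIME type this formatter is able to produce
    SupportedMediaTypes.Add("application/pdf");
  }

  public override async Task WriteResponseBodyAsync(OutputFormatterWriteContext context)
  {
    // Turn the action's object result into PDF bytes (RenderPdfInSomeWay is hypothetical)
    byte[] pdf = RenderPdfInSomeWay(context.Object);
    await context.HttpContext.Response.Body.WriteAsync(pdf, 0, pdf.Length);
  }
}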

Note that by using the Produces attribute you override the content negotiation, as shown here:

 
[Produces("application/json")]
public class ApiController : Controller
{
  // Action methods here
}

The Produces attribute, which you can apply at the controller or method level, forces the output of type ObjectResult to be always serialized in the format specified by the attribute, regardless of the Accept HTTP header.
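For example, you might pin a single action to a different format. The snippet below is just an assumption for illustration; note that returning XML also requires an XML output formatter to be registered at startup (for example, via AddXmlSerializerFormatters):

[HttpGet]
[Produces("application/xml")]
public ObjectResult Get(Guid id)
{
  // Always serialized as XML, regardless of the Accept header,
  // provided an XML output formatter has been registered
  var data = FindResourceDataInSomeWay(id);
  return Ok(data);
}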

For more information on how to format the response of a controller method, you might want to check out the content at bit.ly/2klDgdY.

REST-Oriented Action Result Types

Whether a Web API is better off with a REST design is a highly debatable point. In general, it’s safe enough to say that the REST approach is based on a known set of rules and, in this regard, it is more standard. For this reason, it’s generally recommended for a public API that’s part of the enterprise business. If the API exists only to serve a limited number of clients—mostly under the same control of the API creators—then no real business difference exists between using a REST design and a looser remote-procedure call (RPC) approach.

In ASP.NET Core, there’s nothing like a distinct and dedicated Web API framework. There are just controllers with their set of action results and helper methods. If you want to build a Web API whatsoever, you just return JSON, XML or whatever else. If you want to build a RESTful API, you just get familiar with another set of action results and helper methods. Figure 2 presents the new action result types that ASP.NET Core controllers can return. In ASP.NET Core, an action result type is a type that implements the IActionResult interface. 

Figure 2 Web API-Related Action Result Types  

Type Description
AcceptedResult Returns a 202 status code. In addition, it returns the URI to check on the ongoing status of the request. The URI is stored in the Location header.
BadRequestResult Returns a 400 status code.
CreatedResult Returns a 201 status code. In addition, it returns the URI of the resource created, stored in the Location header.
NoContentResult Returns a 204 status code and null content.
OkResult Returns a 200 status code.
UnsupportedMediaTypeResult Returns a 415 status code.


Note that some of the types in Figure 2 come with buddy types that provide the same core function but with some slight differences. For example, in addition to AcceptedResult and CreatedResult, you find xxxAtActionResult and xxxAtRouteResult types. The difference is in how the types express the URI to monitor the status of the accepted operation and the location of the resource just created. The xxxAtActionResult type expresses the URI as a pair of controller and action strings whereas the xxxAtRouteResult type uses a route name.
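As an example, here’s how a POST action might use CreatedAtAction to express the Location URI as a controller/action pair plus route values. The resource type and helper are the same placeholders used in Figure 3:

[HttpPost]
public IActionResult AddNews(MyResource res)
{
  var resId = CreateResourceInSomeWay(res);
  // Returns HTTP 201; the Location header points to the Get action for the new id
  return CreatedAtAction(nameof(Get), new { id = resId }, res);
}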

OkObjectResult and BadRequestObjectResult, instead, have an xxxObjectResult variation. The difference is that object result types also let you append an object to the response. So OkResult just sets a 200 status code, but OkObjectResult sets a 200 status code and appends an object of your choice. A common way to use this feature is to return a ModelState dictionary updated with the detected error when a bad request is handled.
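Here’s a minimal sketch of that pattern; the validation rule is just an assumption for illustration:

[HttpPut]
public IActionResult UpdateResource(Guid id, string content)
{
  if (String.IsNullOrWhiteSpace(content))
  {
    ModelState.AddModelError("content", "Content cannot be empty");
    // Returns a 400 status code with the ModelState dictionary as the body
    return BadRequest(ModelState);
  }
  var res = UpdateResourceInSomeWay(id, content);
  return Ok(res);
}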

Another interesting distinction is between NoContentResult and EmptyResult. Both return an empty response, but NoContentResult sets a status code of 204, whereas EmptyResult sets a 200 status code. All this said, building a RESTful API is a matter of defining the resource being acted on and arranging a set of calls using the HTTP verb to perform common manipulation operations. You use GET to read, PUT to update, POST to create a new resource and DELETE to remove an existing one. Figure 3 shows the skeleton of a RESTful interface around a sample resource type as it results from ASP.NET Core classes.

Figure 3 Common RESTful Skeleton of Code
 
[HttpGet]
public ObjectResult Get(Guid id)
{
  // Do something here to retrieve the resource
  var res = FindResourceInSomeWay(id);
  return Ok(res);
}
[HttpPut]
public AcceptedResult UpdateResource(Guid id, string content)
{
  // Do something here to update the resource
  var res = UpdateResourceInSomeWay(id, content);
  var path = String.Format("/api/resource/{0}", res.Id);
  return Accepted(new Uri(path));  
}
[HttpPost]
public CreatedResult AddNews(MyResource res)
{
  // Do something here to create the resource
  var resId = CreateResourceInSomeWay(res);
  // Returns HTTP 201 and sets the URI to the Location header
  var path = String.Format("/api/resource/{0}", resId);
  return Created(path, res);
}
[HttpDelete]
public NoContentResult DeleteResource(Guid id)
{
  // Do something here to delete the resource
  // ...
  return NoContent();
}

If you’re interested in further exploring the implementation of ASP.NET Core controllers for building a Web API, have a look at the GitHub folder at bit.ly/2j4nyUe.

Wrapping Up

A Web API is a common element in most applications today. It’s used to provide data to an Angular or MVC front end, as well as to provide services to mobile or desktop applications. In the context of ASP.NET Core, the term “Web API” finally achieves its real meaning without ambiguity or need to further explain its contours. A Web API is a programmatic interface comprising a number of publicly exposed HTTP endpoints that typically (but not necessarily) return JSON or XML data to callers. The controller infrastructure in ASP.NET Core fully supports this vision with a revamped implementation and new action result types. Building a RESTful API in ASP.NET has never been easier!

5 tools for programmers to increase productivity

 

Programming complex code is undoubtedly a difficult task. Programmers often rely on certain online tools to make life easier and achieve speed and accuracy. These tools allow developers to create, test and debug the software. 

With constant technological advancements, developers are looking to enhance their productivity and stay updated with the evolving skill requirements. Here are some tools that programmers must explore to be more productive. 

#1. GitKraken

Quoting from their website, “Axosoft GitKraken is a cross-platform Git client with efficiency, elegance and reliability at the core. It is made for developers by developers”. 

GitKraken is known for its user-friendly interface, easy switching between projects and graphical interface which helps developers to visualize project branches effectively. 

#2. Visual Studio Code

Assuring a frictionless edit-build-debug cycle, VS Code ensures high productivity with syntax highlighting, bracket-matching, box-selection, and more. Additionally, it supports a wide variety of languages. For debugging, VS Code provides an interactive debugger to inspect code and execute commands. 

#3. Docker

Docker is an open-source tool which enables developers to create, deploy and run applications using containers. This guarantees that the application will run consistently across Linux platforms irrespective of any customizations on the host. Containers let applications ship with only the things that are not already running on the host computer, significantly boosting performance. 

#4. Chrome DevTools 

Chrome DevTools is a set of tools built directly into the Google Chrome browser. Websites can be designed better and faster using this tool as it allows the developers to edit pages on-the-go and rectify problems swiftly. It caters to the needs of both beginners and experts by teaching the basics as well as performing higher-level operations like optimizing website speed. 

#5. Postman 

Through design, testing and full production, Postman simplifies API development for developers, ensuring greater productivity. Developers can create automated tests to monitor their APIs and examine responses for debugging, among other functions. With almost 6 million users, Postman is a widely used productivity tool within the developer community.

25 basic Linux terminal commands to remember


On Linux, the command-line is a powerful tool. Once you understand how to use it, it’s possible to accomplish a whole lot of advanced operations really fast. Sadly, new users find the Linux command-line confusing, and don’t know where to start.

In an effort to educate new users on the Linux command-line, we’ve made a list of 25 basic Linux terminal commands to remember. Let’s get started!

1. ls

ls is the list directory command. In order to use it, launch a terminal window and type the command ls.

ls

The ls command can also be used to reveal hidden files with the “a” command line switch.

ls -a

2. cd

cd is how you change directories in the terminal. To swap to a different directory from where the terminal started, do:

cd /path/to/location/

It is also possible to go backwards up a directory by using “..”.

cd ..

3. pwd

To show the current directory in the Linux terminal, use the pwd command.

pwd

4. mkdir

If you’d like to create a new folder, use the mkdir command.

mkdir name-of-new-folder

To create a nested path of folders in one go (making any missing parent directories along the way), use the “p” command line switch.

mkdir -p name-of-new-folder

5. rm

To delete a file from the command line, use the rm command.

rm /path/to/file

rm can also be used to delete a folder if there are files inside of it by making use of the “rf” command line switch.

rm -rf /path/to/folder

6. cp

Want to make a copy of a file or folder? Use the cp command.

To copy a file, use cp followed by the location of the file and the destination to copy it to.

cp /path/to/file /path/to/destination

Or, to copy a folder, use cp with the “r” command line switch.

cp -r /path/to/folder /path/to/destination

7. mv

The mv command can do a lot of things on Linux. It can move files around to different locations, but it can also rename files.

To move a file from one location to another, try the following example.

mv /path/to/file /place/to/put/file

If you want to move a folder, write the location of the folder followed by the desired location where you’d like to move it.

mv /path/to/folder /place/to/put/folder/

Lastly, to rename a file or folder, cd into the directory of the file/folder you’d like to rename, and then use the mv command, for example:

mv name-of-file new-name-of-file

Or, for a folder, do:

mv name-of-folder new-name-of-folder

8. cat

The cat command lets you view the contents of files in the terminal. To use cat, write the command followed by the location of the file you’d like to view. For example:

cat /location/of/file

9. head

Head lets you view the top 10 lines of a file. To use it, enter the head command followed by the location of the file.

head /location/of/file

10. tail

Tail lets you view the bottom 10 lines of a file. To use it, enter the tail command followed by the location of the file.

tail /location/of/file

11. ping

On Linux, the ping command lets you check the latency between your network and a remote internet or LAN server.

ping website.com

Or

ping IP-address

To ping only a few times, use the ping command followed by the “c” command line switch and a number. For example, to ping Google 3 times, do:

ping google.com -c3

12. uptime

To check how long your Linux system has been online, use the uptime command.

uptime

13. uname

The uname command can be used to view your current distribution codename, release number, and even the version of Linux you are using. To use uname, write the command followed by the “a” command line switch.

Using the “a” command line switch prints out all information, so it’s best to use this instead of all other options.

uname -a

14. man

The man command lets you view the instruction manual of any program. To take a look at the manual, run the man command followed by the name of the program. For example, to view the manual of cat, run:

man cat

15. df

Df is a way to easily view how much space is taken up on the file system(s) on Linux. To use it, write the df command.

df

To make df more easily readable, use the “h” command line switch. This puts the output in “human readable” mode.

df -h

16. du

Need to view the space that a directory on your system is taking up? Make use of the du command. For example, to see how big your /home/ folder is, do:

du ~/

To make the du output more readable, try the “h” command-line switch. This will put the output in “human readable” mode.

du -h ~/

17. whereis

With whereis, it’s possible to track down the exact location of an item in the command-line. For example, to find the location of the Firefox binary on your Linux system, run:

whereis firefox

18. locate

Searching for files, programs and folders on the Linux command-line is made easy with locate. To use it, just write out the locate command, followed by a search term.

locate search-term

19. grep

With the grep command, it’s possible to search for a pattern. A good example use of the grep command is to use it to filter out a specific line of text in a file.

Understand that grep isn’t a command that should ever be run by itself. Instead, it must be combined, like so:

cat text-file.txt | grep 'search term'

Essentially, to use grep to search for patterns, remember this formula:

command command-operations | grep 'search term'

20. ps

To view current running processes directly from the Linux terminal, make use of the ps command.

ps

Need a fuller, more detailed report of processes? Run ps with aux.

ps aux

21. kill

Sometimes, you need to kill a problem program. To do this, you’ll need to take advantage of the kill command. For example, to close Firefox, do the following.

First, use pidof to find the process number for Firefox.

pidof firefox

Then, kill it with the kill command.

kill process-id-number

Still won’t close? Use the “9” command-line switch.

kill -9 process-id-number

22. killall

Using the killall command, it’s possible to end all instances of a running program. To use it, run the killall command followed by the name of a program. For example, to kill all running Firefox processes, do:

killall firefox

23. curl

Need to download a file from the internet through the Linux terminal? Use curl! To start a download, write the curl command followed by the file’s URL, the > symbol, and the location where you’d like to save it. For example:

curl https://www.download.com/file.zip > ~/Downloads/file.zip

24. free

Running out of memory? Check your swap space and free RAM space with the free command.

free

25. chmod

With chmod, it’s possible to update the permissions of a file or folder.

To update the permissions of a file so everyone on the PC can read, write and execute it, do:

chmod a+rwx /location/of/file-or/folder/

To update the permissions so that the owner gains read and write access, try:

chmod u+rw /location/of/file

To update permissions for a specific group or for everyone else (the “world”) on the Linux system, run:

chmod g+rx /location/of/file

or

chmod o+rx /location/of/file

Conclusion

The Linux command-line has endless actions and operations to know, and even after getting through this list, you’ll still have a lot more to learn. That said, this list is sure to help beef up your command-line knowledge. Besides, everyone has to start somewhere!

6 Most Demanded Programming Languages of 2019



Learning the right programming language at the right time is very important. If you are a student or an aspiring software developer who is planning to learn a new programming language, you should check the trend once.

There are many job portals and trend-analysis websites that release lists of popular languages at regular intervals. These lists not only help students and professionals get an idea about the most in-demand languages out there but also shed some light on job availability. Today, I will share the six most in-demand programming languages based on the number of jobs available on Indeed in January 2019.

Most In-Demand Programming Languages of 2019

 

1. Java – 65,986 jobs

Java was developed by James Gosling at Sun Microsystems, which was later acquired by the Oracle Corporation. This is one of the most used languages in the world. The number of job postings has grown by 6% compared to last year.

Java is based on the “write once, run anywhere” (WORA) concept. When you compile Java code, it’s converted into bytecode, which can run on any platform without any need for recompilation. That’s why it’s also called a platform-independent language.

Read: 5 Important Tips to Become a Good Java Developer

2. Python – 61,818 jobs

Python was developed by a Dutch programmer, Guido van Rossum. It can be considered one of the fastest-growing programming languages. Python has seen growth of around 24% in terms of job postings, with 61,000 postings compared to last year’s 46,000.

It’s a high-level object-oriented programming language that offers a wide range of third-party libraries and extensions to programmers. Developers also say Python is simple and easy to learn. This language is also used to decrease the time and cost spent on application maintenance.

Read: 10 Best Python Courses For Programmers and Developers

3. JavaScript – 38,018 jobs

JavaScript is the third most popular programming language in our list. It was inspired by Java and developed by American technologist Brendan Eich. This year, JavaScript job postings haven’t seen much change, but the language still managed to secure the third position.

Unlike other languages, JavaScript can’t be used to develop apps or applets. It’s fast and doesn’t need to be compiled before use. JavaScript enables our code to interact with the browser and can even change or update both HTML and CSS.

Also Read: Best Courses to Learn JavaScript Programming Online

4. C++ – 36,798 jobs

Though there are many programming languages available today, the power of C++ can’t be ignored. Developed by Danish computer scientist Bjarne Stroustrup, C++ is widely used for game development, firmware development, system development, client-server applications, drivers, etc. C++ is actually an advanced version of C, with object-oriented programming capabilities. Its popularity grew by 16.22% as compared to the last year’s job postings.

Read: 6 Best IDEs For C and C++ Programming Language

5. C# – 27,521 jobs

C# is popularly used for Windows program development under Microsoft’s proprietary .NET framework. It’s mainly used for implementing back-end services and database applications. It’s a hybrid of the C++ and C languages. Looking at the numbers, C#’s job postings didn’t grow that much, but it’s still one of the most in-demand languages.

Read: Difference Between C, C++, Objective-C and C# Programming Language

6. PHP – 16,890 jobs

One of the most popular languages used in web development, Hypertext Preprocessor, or PHP, may have been losing ground in recent years. It’s an open-source scripting language developed by Danish-Canadian programmer Rasmus Lerdorf.

Though the community is working hard to provide support, competing with Python and other newcomers seems difficult. PHP is commonly used to retrieve data from a database for use on web pages. Its job postings increased by about 2,000 compared to last year.

Read: Is PHP a Scripting or a Programming Language?

I hope you now have an idea and are able to decide which programming language you should learn in 2019. Whatever language you choose, first build your base by learning the fundamentals, then start attempting small problems, and ultimately move on to medium and large projects.

Visual Studio Code Keyboard Shortcut For Windows

Introduction

 
In this article, we will learn some Visual Studio Code keyboard shortcuts for working on a Windows machine. Keyboard shortcuts help developers work faster and more efficiently, boosting their performance. Keyboard shortcuts are keys or combinations of keys that provide an alternative way to do something. These shortcuts can provide an easier and quicker method of using Visual Studio Code.
 
 
I have categorized all the shortcut keys into the following categories.
  • General Shortcuts
  • Basic Editing Shortcuts
  • Navigation Shortcuts
  • Search and Replace Shortcuts
  • Multi-Cursor and Selection Shortcuts
  • Editor Management Shortcuts
  • File Management Shortcuts
  • Debug Shortcuts
  • Integrated Terminal Shortcuts
We can also check all the shortcut keys using the following key combination.
 
  1. Ctrl+K Ctrl+S
 
General Shortcuts
 
Shortcut Key Descriptions
Ctrl+Shift+P, F1 Show Command Palette
Ctrl+P Quick Open, Go to File
Ctrl+Shift+N New window
Ctrl+Shift+W Close window
Ctrl+, User Settings
Ctrl+K Ctrl+S Keyboard Shortcuts
 
Basic Editing Shortcuts
 
Shortcut Key Descriptions
Ctrl+X Cut line
Ctrl+C Copy line
Alt+ ↑ / ↓ Move line up/down
Shift+Alt + ↓ / ↑ Copy line up/down
Ctrl+Shift+K Delete line
Ctrl+Enter Insert line below
Ctrl+Shift+Enter Insert line above
Ctrl+Shift+\ Jump to matching bracket
Ctrl+] / [ Indent/outdent line
Home / End Go to beginning/end of line
Ctrl+Home Go to beginning of file
Ctrl+End Go to end of file
Ctrl+↑ / ↓ Scroll line up/down
Alt+PgUp / PgDn Scroll page up/down
Ctrl+Shift+[ Fold (collapse) region
Ctrl+Shift+] Unfold (uncollapse) region
Ctrl+K Ctrl+[ Fold (collapse) all subregions
Ctrl+K Ctrl+] Unfold (uncollapse) all subregions
Ctrl+K Ctrl+0 Fold (collapse) all regions
Ctrl+K Ctrl+J Unfold (uncollapse) all regions
Ctrl+K Ctrl+C Add line comment
Ctrl+K Ctrl+U Remove line comment
Ctrl+/ Toggle line comment
Shift+Alt+A Toggle block comment
Alt+Z Toggle word wrap
 
Navigation Shortcuts
 
 
Shortcut Key Descriptions
Ctrl+T Show all Symbols
Ctrl+G Go to Line
Ctrl+P Go to File
Ctrl+Shift+O Go to Symbol
Ctrl+Shift+M Show Problems panel
F8 Go to next error or warning
Shift+F8 Go to previous error or warning
Ctrl+Shift+Tab Navigate editor group history
Alt+ ← / → Go back / forward
Ctrl+M Toggle Tab moves the focus
 
Search and Replace Shortcuts
 
Shortcut Key Descriptions
Ctrl+F Find
Ctrl+H Replace
F3 / Shift+F3 Find next/previous
Alt+Enter Select all occurrences of Find match
Ctrl+D Add selection to next Find match
Ctrl+K Ctrl+D Move last selection to next Find match
Alt+C / R / W Toggle case-sensitive / regex / whole word
 
Multi-cursor and selection Shortcuts
 
Shortcut Key Descriptions
Alt+Click Insert cursor
Ctrl+Alt+ ↑ / ↓ Insert cursor above / below
Ctrl+U Undo last cursor operation
Shift+Alt+I Insert cursor at end of each line selected
Ctrl+L Select current line
Ctrl+Shift+L Select all occurrences of the current selection
Ctrl+F2 Select all occurrences of the current word
Shift+Alt+→ Expand selection
Shift+Alt+← Shrink selection
 
Editor Management Shortcuts
 
Shortcut Key Descriptions
Ctrl+F4, Ctrl+W Close editor
Ctrl+K F Close folder
Ctrl+\ Split editor
Ctrl+1 / 2 / 3 Focus into 1st, 2nd, or 3rd editor group
Ctrl+K Ctrl+ ←/→ Focus into previous/next editor group
Ctrl+Shift+PgUp / PgDn Move editor left/right
Ctrl+K ← / → Move active editor group
 
File Management Shortcuts
 
Shortcut Key Descriptions
Ctrl+N New File
Ctrl+O Open File
Ctrl+S Save
Ctrl+Shift+S Save As...
Ctrl+K S Save All
Ctrl+F4 Close
Ctrl+K Ctrl+W Close All
Ctrl+Shift+T Reopen closed editor
Ctrl+K Enter Keep preview mode editor open
Ctrl+Tab Open next
Ctrl+Shift+Tab Open previous
Ctrl+K P Copy path of an active file
Ctrl+K R Reveal active file in Explorer
Ctrl+K O Show active file in a new window/instance
 
Debug Shortcuts
 
Shortcut Key Descriptions
F9 Toggle breakpoint
F5 Start/Continue
Shift+F5 Stop
F11 / Shift+F11 Step into/out
F10 Step over
Ctrl+K Ctrl+I Show hover
 
Integrated Terminal Shortcuts 
 
Shortcut Key Descriptions
Ctrl+` Show integrated terminal
Ctrl+Shift+` Create a new terminal
Ctrl+C Copy selection
Ctrl+V Paste into an active terminal
Ctrl+↑ / ↓ Scroll up/down
Shift+PgUp / PgDn Scroll page up/down
Ctrl+Home / End Scroll to the top/bottom

5 Evergreen Goals to Guide a Technology Organization

These 5 evergreen goals are a useful way to help technology organizations of all sizes make decisions, categorize work, allocate resources, and spur innovation and productivity without interfering with team-specific, time-boxed goals. Whether you’re leading through change or focusing your team, these evergreen goals (or your variations of them) might be just what you need to bring foundational consistency to your technology organization without slowing it down. Here’s our set of evergreen goals.

1. Reduce Complexity

Some systems might be complex because the problems they address are complicated. Perhaps the complexity is justified. That said, it’s startling how much complexity is created unintentionally. This evergreen goal is focused on reducing accidental or unintentional complexity. Sometimes it’s created because of expediency, but often it’s the result of architecture that does not evolve properly. The end result is the same, however. You probably see this in some of your own systems as they become increasingly difficult to fix or improve in a timely manner without causing problems in other areas. Unintentionally complex systems are also difficult to secure, scale, move, and recover. I’ve seen this at startups as well as at long-standing companies like Morningstar with lengthy histories of product development, acquisitions, and integration. This goal is not only about technology but also about reducing complexity in the processes that drive how we plan, work together, communicate, and hire.

2. Improve Product Completeness

Technology teams often cut corners in order to deliver promised functionality on schedule. Regardless of why or how that happens, it does. The purpose of this evergreen goal is to encourage teams to always think intentionally about product completeness. We challenge our teams to continually find ways to improve security, scalability, and resilience, for example, and not just ways to deliver new functionality. Completeness work is often very underappreciated until something terrible happens. Don’t wait until you experience a data breach, extended downtime, or an inability to scale before you think about product completeness. Be pragmatic, but don’t be foolish.

3. Increase Uptime

Delivering a product (internal or otherwise) is one thing, but keeping it up and running is an operational challenge that is often an afterthought in many organizations. The purpose of this evergreen goal is to encourage teams to think about monitoring, alerting, logging, incident response, recoverability, and automation. This isn’t just about technology. It’s also about ensuring that operations processes are efficient, modern, updated, and focused on the customer. Identify and correct problems before your customers report them. They expect that from you.

4. Own Less Infrastructure

In this modern age of high quality public cloud infrastructure, it makes little sense for most companies to run their own data centers for most of their workloads. It’s rarely a business differentiator anymore. Obviously, this evergreen goal might only apply to you if you’re still running your own data centers, but also consider other infrastructure you might own. Do you have your own call center equipment, for example? It might be worth rethinking that. At Morningstar, we are in the middle of a multi-year cloud transformation and this goal is particularly important to us. The purpose of this goal is to encourage teams to find ways to reduce current infrastructure footprints so that we can continue to draw down our dependence on the infrastructure that we own and maintain.

5. Maximize Talent

The technology landscape is changing quickly, and access to rich web services is abundant. A quick look at any major cloud service provider reveals that they’ve moved well beyond infrastructure services into services that spur innovation and increase productivity. Look at all the services related to machine learning, for example. Hopefully, you’ve hired people not just for what they already know but also for their aptitude and desire for continuing education. The tendency at many companies is to hire from the outside without first considering modernizing the skill sets of the people they already have in-house. The modern workforce expects companies to invest in professional development, so this evergreen goal to maximize talent is a constant reminder to do that. It benefits individuals, teams, and the overall business to re-skill in-house talent.

Takeaways

Remember, though, that you cannot change culture immediately. You have to nurture and evolve it. Installing and promoting these evergreen goals is often like creating a new habit or making a lifestyle change. It requires commitment, persistence, repetition, and encouragement. Use the terminology and concepts in meetings, conversations, and presentations, and encourage others to do the same. Make the effort inclusive, sustained, and intentional. The overall purpose of these evergreen goals is to remove friction from your technology organization in order to spur innovation and increase productivity. Sometimes simple measures like these yield the most impressive results.
