Sumedh Meshram

A Personal Blog

Healthcheck endpoints in C# in MVC projects using ASP.NET Core, and writing results to Azure Application Insights

Every developer wants to build a system that never breaks, but in reality things go wrong. The best systems are built to expect that and handle problems gracefully, rather than just silently failing.

Maybe your database becomes unavailable (e.g. runs out of hard disk space) and your failover doesn’t work – or maybe a third party web service that you depend on stops working.

Sometimes your application can be programmed to recover from things going wrong – here’s my post on The Polly Project to find out more about one way of doing that – but when there’s a catastrophic failure that you can’t recover from, you want to be alerted as soon as it happens, rather than hearing about it from a customer.

And it’s kind to provide a way for your customers to find out about the health of your system. As an example, just check out the monitoring hub below from Postcodes.io – this is a great example of being transparent about key system metrics like service status, availability, performance, and latency.

postcode

MVC projects in ASP.NET Core have a built in feature to provide information on the health of your website. It’s really simple to add it to your site, and this instrumentation comes packaged as part of the default ASP.NET Core toolkit. There are also some neat extensions available on NuGet to format the data as JSON, add a nice dashboard for these healthchecks, and finally to push the outputs to Azure Application Insights. As I’ve been implementing this recently, I wanted to share with the community how I’ve done it.

Scott Hanselman has blogged about this previously, but there have been some updates since he wrote about this which I’ve included in my post.

Returning system health from an ASP.NET Core v2.2 website

Before I start – I’ve uploaded all the code to GitHub here so you can pull the project and try yourself. You’ll obviously need to update subscription keys, instrumentation keys and connection strings for databases etc.

Edit your MVC site’s Startup.cs file and add the line below to the ConfigureServices method:

services.AddHealthChecks();

And then add the line of code below to the Configure method.

app.UseHealthChecks("/healthcheck");

That’s it. Now your website has a URL available to tell whether it’s healthy or not. When I browse to my local test site at the URL below…

http://localhost:59658/healthcheck

…my site returns the word “Healthy”. (Obviously your local test site’s URL will have a different port number, but you get the idea.)
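For example, requesting the endpoint from the command line (assuming the same local port) returns the plain-text status:

curl http://localhost:59658/healthcheck
# Healthy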

So this is useful, but it’s very basic. Can we amp this up a bit – let’s say we want to see a JSON representation of this? Or what about our database status? Fortunately, there’s a great series of libraries from Xabaril (available on GitHub here) which massively extend the core healthcheck functions.

Returning system health as JSON

First, install the AspNetCore.HealthChecks.UI NuGet package.

Install-Package AspNetCore.HealthChecks.UI

Now I can change the code in my Startup.cs file’s Configure method to specify some more options.

The code below changes the response output to be JSON format, rather than just the single word “Healthy”.

// Requires: using Microsoft.AspNetCore.Diagnostics.HealthChecks;
// Requires: using HealthChecks.UI.Client;
app.UseHealthChecks("/healthcheck", new HealthCheckOptions
    {
        // run every registered health check for this endpoint
        Predicate = _ => true,
        // write the result as JSON rather than plain text
        ResponseWriter = UIResponseWriter.WriteHealthCheckUIResponse
    });

And as you can see in the image below, when I browse to the healthcheck endpoint I configured as “/healthcheck”, it’s now returning JSON:

healthcheck basic json
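The exact payload depends on which checks you’ve registered – at this point there are none, so it’s sparse – but the shape is roughly like this (a hand-written sample rather than real output):

{
  "status": "Healthy",
  "totalDuration": "00:00:00.0150000",
  "entries": {}
}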

What about checking the health of other system components, like URIs, SQL Server or Redis?

Xabaril has got you covered here as well. For these three types of things, I just install the NuGet packages with the commands below:

Install-Package AspNetCore.HealthChecks.Uris
Install-Package AspNetCore.HealthChecks.Redis
Install-Package AspNetCore.HealthChecks.SqlServer

Check out the project’s ReadMe file for a full list of what’s available.

Then change the code in the ConfigureServices method in the project’s Startup.cs file.

services.AddHealthChecks()
        .AddSqlServer(connectionString: Configuration.GetConnectionString("SqlServerDatabase"),
                      healthQuery: "SELECT 1;",
                      name: "Sql Server",
                      failureStatus: HealthStatus.Degraded)
        .AddRedis(redisConnectionString: Configuration.GetConnectionString("RedisCache"),
                  name: "Redis",
                  failureStatus: HealthStatus.Degraded)
        .AddUrlGroup(new Uri("https://localhost:59658/Home/Index"),
                     name: "Base URL",
                     failureStatus: HealthStatus.Degraded);

Obviously in the example above, I have my connection strings stored in my appsettings.json file.
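For reference, the relevant appsettings.json section looks something like this (with placeholder values rather than real connection strings):

{
  "ConnectionStrings": {
    "SqlServerDatabase": "Server=.;Database=MyDatabase;Trusted_Connection=True;",
    "RedisCache": "localhost:6379"
  }
}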

When I browse to the healthcheck endpoint now, I get a much richer JSON output.

health json

Can this information be displayed in a more friendly dashboard?

We don’t need to just show JSON or text output – Xabaril allows the creation of a clear and simple dashboard to display the health checks in a user-friendly form. I updated my code in the Startup.cs file – first of all, my ConfigureServices method now has the code below:

services.AddHealthChecks()
        .AddSqlServer(connectionString: Configuration.GetConnectionString("SqlServerDatabase"),
                      healthQuery: "SELECT 1;",
                      name: "Sql Server",
                      failureStatus: HealthStatus.Degraded)
        .AddRedis(redisConnectionString: Configuration.GetConnectionString("RedisCache"),
                  name: "Redis",
                  failureStatus: HealthStatus.Degraded)
        .AddUrlGroup(new Uri("https://localhost:59658/Home/Index"),
                     name: "Base URL",
                     failureStatus: HealthStatus.Degraded);
        
services.AddHealthChecksUI(setupSettings: setup =>
{
    setup.AddHealthCheckEndpoint("Basic healthcheck", "https://localhost:59658/healthcheck");
});

And my Configure method also has the code below.

app.UseHealthChecks("/healthcheck", new HealthCheckOptions
    {
        Predicate = _ => true,
        ResponseWriter = UIResponseWriter.WriteHealthCheckUIResponse
    });
 
app.UseHealthChecksUI();

Now I can browse to a new endpoint which presents the dashboard below:

http://localhost:59658/healthchecks-ui#/healthchecks

health default ui
And if you don’t like the default CSS, you can configure it to use your own. Xabaril has an example of a CSS file to include here, and I altered my Configure method to the code below, which uses this CSS file.

app.UseHealthChecks("/healthcheck", new HealthCheckOptions
    {
        Predicate = _ => true,
        ResponseWriter = UIResponseWriter.WriteHealthCheckUIResponse
    })
    .UseHealthChecksUI(setup =>
    {
        setup.AddCustomStylesheet(@"wwwroot\css\dotnet.css");
    });
 

And now the website is styled slightly differently, as you can see in the image below.

health styled ui

What happens when a system component fails?

Let’s break something. I’ve turned off SQL Server, and a few seconds later the UI automatically refreshes to show the overall system health status has changed – as you can see, the SQL Server check has been changed to a status of “Degraded”.

health degrades

And this same error appears in the JSON message.

health degraded json

Can I monitor these endpoints in Azure Application Insights?

Sure – but first make sure your project is configured to use Application Insights.

If you’re not familiar with Application Insights and .NET Core applications, check out some more information here.

If it’s not set up already, you can add the Application Insights Telemetry by right clicking on your project in the Solution Explorer window of VS2019, selecting “Add” from the context menu, and choosing “Application Insights Telemetry…”. This will take you through the wizard to configure your site to use Application Insights.

aitel
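
The AddApplicationInsightsPublisher extension used in the snippet below comes from a separate package in the same Xabaril family, so install that first:

Install-Package AspNetCore.HealthChecks.Publisher.ApplicationInsights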

Once that’s done, I changed the code in my Startup.cs file’s ConfigureServices method to explicitly push health check results to Application Insights, as shown in the snippet below:

services.AddHealthChecks()
        .AddSqlServer(connectionString: Configuration.GetConnectionString("SqlServerDatabase"),
                      healthQuery: "SELECT 1;",
                      name: "Sql Server",
                      failureStatus: HealthStatus.Degraded)
        .AddRedis(redisConnectionString: Configuration.GetConnectionString("RedisCache"),
                  name: "Redis",
                  failureStatus: HealthStatus.Degraded)
        .AddUrlGroup(new Uri("https://localhost:44398/Home/Index"),
                     name: "Base URL",
                     failureStatus: HealthStatus.Degraded)
        // pushes each health report to Application Insights as custom telemetry
        .AddApplicationInsightsPublisher();
        
services.AddHealthChecksUI(setupSettings: setup =>
{
    setup.AddHealthCheckEndpoint("Basic healthcheck", "https://localhost:44398/healthcheck");
});

Now I’m able to view these results in Application Insights – the way I did this was:

  • First browse to portal.azure.com and click on the “Application Insights” resource which has been created for your web application (it’ll probably be top of the recently created resources).
  • Once that Application Insights blade opens, click on the “Metrics” menu item (highlighted in the image below):

app insights metrics

When the chart window opens – it’ll look like the image below – click on the “Metric Namespace” dropdown and select the “azure.applicationinsights” value (highlighted below).

app insights custom metric

Once you’ve selected the namespace to plot, choose the specific metric from that namespace. I find that the “AspNetCoreHealthCheckStatus” metric is most useful to me (as shown below).

app insights status

And finally I also choose to display the “Min” value of the status (as shown below), so if anything goes wrong the value plotted will be zero.

app insights aggregation

After this, you’ll have a graph displaying availability information for your web application. As you can see in the graph below, it’s pretty clear when I turned my SQL Server instance back on, and the application went from an overall health status of ‘Degraded’ to ‘Healthy’.

application insights

Wrapping up

I’ve covered a lot of ground in this post – from .NET Core 2.2’s built in HealthCheck extensions, building on that to use community content to check other site resources like SQL Server and Redis, adding a helpful dashboard, and finally pushing results to Azure Application Insights. I’ve also created a bootstrapper project on GitHub to help anyone else interested in getting started with this – I hope it helps you.

5 DevOps tools you should know in 2019

Written by Marius Rimkus
on July 22, 2019

DevOps culture is now an integral part of every tech-savvy business and plays a role in many business processes, ranging from project planning to software delivery. As cloud services prevail today, the requirement for related supplementary services is growing rapidly. DevOps technologies are multiplying as well, so how should one choose the right tools to automate their work? There are a lot of opinions, but I will share the list of DevOps technologies I find the most important to master in 2019.

 

 

Ansible

Ansible is a fairly simple software provisioning, configuration management and application deployment tool, which ensures faster time-to-market for your applications. Whether you are a one-man company or an enterprise, you can automate orchestration, cloud provisioning, machine deployment and other tasks. I like Ansible because it is not as complex as Puppet or Chef, but it speeds up productivity just as well.

  • Ansible playbooks are written in YAML, which is one of the easiest data-serialization languages for creating configuration files (see the short playbook sketch after this list).
  • It’s fast, performs all its functions over SSH and doesn't require agent installation.
  • It allows you to create groups of servers, describe how these should be configured and what actions should be performed on these machines.
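To give a flavour, here’s a minimal playbook sketch – the “web” host group and the nginx package are just example names:

# site.yml – install nginx and make sure it's running on every host in the "web" group
- hosts: web
  become: yes
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
    - name: Ensure nginx is running
      service:
        name: nginx
        state: started

Running it is a single command: ansible-playbook -i inventory site.yml.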

 

Jenkins

A lot of DevOps engineers call Jenkins the best CI/CD tool on the market, since it’s incredibly useful. Jenkins is an automation server that is written in Java and is used to report changes, conduct live testing and distribute code across multiple machines. As Jenkins has a built-in GUI and over 1000 plugins to support building and testing your application, it is considered a really powerful, yet easy to use tool. Thanks to these plugins, Jenkins integrates well with practically every other instrument in the continuous integration and continuous delivery toolchain (a minimal pipeline sketch follows the list below).

  • Easy to install, with a lot of support available from the community.
  • 1000+ plugins available, and it’s easy to create your own if needed.
  • It can be used to publish results and send email notifications.
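As an illustration, a minimal declarative Jenkinsfile might look like the sketch below – the stage names and shell commands are placeholders for your own build steps:

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh './build.sh'   // your real build command goes here
            }
        }
        stage('Test') {
            steps {
                sh './run-tests.sh'
            }
        }
    }
}

Check this file into the root of your repository and a Multibranch Pipeline job will pick it up automatically.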

 

Docker

Docker is a software containerization platform that allows DevOps teams to build, ship, and run distributed processes within containers. This gives developers the ability to create predictable environments that are isolated from the rest of the applications and can be run anywhere. Containers are isolated but share the same OS kernel, so you use hardware resources more efficiently compared to virtual machines.

Each container can hold a single process, like a web server or database management system. You can create a cluster of containers distributed across different nodes to have your application up and running in both load-balancing and high-availability modes. Containers can communicate on a private network, as you most probably want to keep some parts of your application private for security purposes. Simply expose your web server to the Internet and you are good to go.

What I like most is that you can install Docker on your computer to run containers locally, to make ad-hoc software tests without installing their dependencies globally. When you are done, you simply terminate your Docker container and your computer is as clean as new (a minimal Dockerfile sketch follows the list below).

  • Build once, run anywhere! You can package an application from your laptop and run it unmodified on any public/private cloud or bare metal server.
  • Containers are lightweight and fast.
  • Docker Hub offers many official and community-built public Docker images.
  • Separating different components of a large application into containers has security benefits: if one container is compromised, others remain unaffected.
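For example, a minimal Dockerfile for serving a static site might look like this – the base image and paths are just examples:

# Dockerfile – serve static content with nginx
FROM nginx:alpine
COPY ./site /usr/share/nginx/html
EXPOSE 80

Build and run it locally with docker build -t my-site . followed by docker run -p 8080:80 my-site, then browse to http://localhost:8080.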

 

Kubernetes

While Docker allows developers to build, ship and run applications in containers easily, Kubernetes makes running containers in a cluster as easy as ever. You can automatically deploy, scale, monitor and manage your cloud-native application with Kubernetes. It is a powerful orchestrator that allows you to manage communication between containerized components, known as pods, and coordinate them as a cluster. 

Kubernetes has now become the heart of the microservices application. The ecosystem around it is expanding by the minute, with the Cloud Native Computing Foundation ensuring its future success. There are now many additional observability, networking and distributed data storage services that complement Kubernetes in building a loosely coupled distributed system that is resilient, manageable and observable. A minimal Deployment manifest sketch follows the list below.

  • Open-source orchestrator.
  • Easy container management.
  • Horizontal autoscaling - if you get high loads, you can replicate your pods and balance the load across them to avoid downtime.
  • Self-healing, Automated Rollouts and Rollbacks - if something goes wrong, you can automatically replace, restart, reschedule your containers or rollout/rollback to the desired state of the containerized application.
  • Service Discovery - Kubernetes uses unique IP addresses and can put a set of containers behind a single DNS name. This allows you to easily track and identify your containers across the cluster.
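As a sketch, a minimal Deployment manifest might look like this – the name, image and replica count are just examples:

# deployment.yaml – run three replicas of a containerized web app
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-web-app
  template:
    metadata:
      labels:
        app: my-web-app
    spec:
      containers:
        - name: web
          image: my-registry/my-web-app:1.0
          ports:
            - containerPort: 80

Apply it with kubectl apply -f deployment.yaml and Kubernetes takes care of scheduling the pods across the cluster.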

 

RabbitMQ

A great messaging and queuing tool which you can use with applications running on most operating systems. Managing queues, exchanges and routing with it is a breeze. Even if you have an elaborate configuration to build, it’s relatively easy to do so, since the tool is really well documented. You can stream a lot of different high-performance processes and avoid system crashes through a friendly user interface. It’s a durable and robust messaging broker that is worth your attention. As RabbitMQ developers like to say, it’s "messaging that just works".
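To show how little code it takes, here’s a minimal publisher sketch using the official .NET client (the RabbitMQ.Client NuGet package) – the broker address, queue name and message are just examples:

using System.Text;
using RabbitMQ.Client;

class Program
{
    static void Main()
    {
        // assumes a RabbitMQ broker running locally on the default port
        var factory = new ConnectionFactory { HostName = "localhost" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            // declare the queue (idempotent) and publish a single message to it
            channel.QueueDeclare(queue: "hello", durable: false, exclusive: false,
                                 autoDelete: false, arguments: null);
            var body = Encoding.UTF8.GetBytes("Hello RabbitMQ!");
            channel.BasicPublish(exchange: "", routingKey: "hello",
                                 basicProperties: null, body: body);
        }
    }
}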

What is Helm and why you should love it?

Helm is the first application package manager running atop Kubernetes. It allows you to describe an application’s structure through convenient helm charts, and to manage it with simple commands.
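For instance, with Helm 2-era commands, finding and deploying a packaged application looks something like this (the chart and release names are just examples):

helm search wordpress                             # find a chart in the configured repositories
helm install --name my-release stable/wordpress   # deploy it to the cluster as a release
helm upgrade my-release stable/wordpress          # roll out a new version
helm rollback my-release 1                        # return to a previous revision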

 

Why is Helm important? Because it’s a huge shift in the way server-side applications are defined, stored and managed. Adoption of Helm might well be the key to mass adoption of microservices, as using this package manager simplifies their management greatly.

What is Helm and why you should love it

Why are microservices so important? They have quite a few advantages:

  • When there are several microservices instead of a monolithic application, each microservice can be managed, updated and scaled individually
  • Issues with one microservice do not affect the functionality of other components of the application
  • A new application can easily be composed of existing loosely coupled microservices

Of course, Helm is not the only package manager, nor is it perfect. However, the project is now being actively developed and is growing a passionate community that appreciates the benefits of using Helm charts for software development.

Helm benefits and flaws

Unlike Homebrew or Aptitude desktop package managers, or Azure Resource Manager templates (ARMs) and Amazon Machine Images (AMIs) that run on a single server, Helm charts are built atop Kubernetes and benefit from its cluster architecture. The main benefit of this approach is the ability to consider scalability from the start. The charts for all the images used by Helm are stored in chart repositories (such as the public Helm Hub), so DevOps teams can search them and add them to their projects with ease.

What is Helm and why you should love it 1

For example, say you need to launch a website built with WordPress, Joomla, Django or any other CMS. You expect the website to receive millions of daily visitors from day one, and you must make sure such huge numbers of connections will not lead to freezes or service unavailability.

Using virtualization capabilities ensures scaling, yes. Just keep in mind that an AMI, an ARM (or a Docker container for that matter) you use to launch the app will be dependent on the virtual machine it is stored on, and will be able to scale only the way virtual machines are scaled – by adding more resources to the pool.

With Helm, we have quite another picture. The application can be composed of clearly defined microservices and we can scale only the ones we need to scale, adding more Kubernetes nodes and pods to the cluster. Instead of working with a holistic image and growing all the resources, you operate a set of images and scale them independently.

The problems begin when you want to launch a new instance of an application that runs, let’s say, 50 microservices. Starting and combining them all by hand would be a laborious and error-prone task. With Helm, however, all you need to know is the names of the charts for the images responsible: launching a new instance is a matter of installing the corresponding Helm charts.

The only significant issue with Helm so far is that when two Helm charts have the same labels, they interfere with each other and impair the underlying resources. This means it’s better to compose a new image for the project than to add a single Helm chart to it, and it affects rollbacks too. However, the community has found workarounds for the issue, and we are sure it will be removed for good in future versions of the tool.

Final thoughts on the future of Helm

We are sure Kubernetes is the future of container orchestration in the cloud, and Helm is the way to use Kubernetes most efficiently. Of course, a DevOps team can do the same using standard kubectl commands, yet working with Helm provides the ability to quickly define, cleanly manage and easily deploy applications. That said, the Kubernetes + Helm duo can (and must) become the basic toolset for any DevOps specialist in the years to come – namely, a helm to navigate the cloud and deliver the containers safely.

Artificial Intelligence in Software Development and Testing

According to Gartner, artificial intelligence will be omnipresent in all spheres of technology and will be prominent among the top investment priorities of CIOs by 2020. Going by the figures of the market research firm, the market for artificial intelligence in North America in 2019 is worth approximately $6.36 billion.

Technical giants like Amazon, Facebook, Google, and many others spend huge sums of money on incorporating AI into their software.

AI emerged as an enterprise technology and has changed the outlook of everything, including software development and software testing. It is, therefore, important that we take a minute to look into the role of artificial intelligence in software development and testing.

Higher Level of Precision

It is natural for humans to make errors. Even highly skilled testers sometimes end up making mistakes while performing manual testing. With automated testing, the same steps can be executed with precision every time the task of testing is undertaken, and the specific outcomes are reported without fail. Testers are freed from ongoing manual examination, and they gain a more significant proportion of time to develop new automated software tests and work on more sophisticated features.

Artificial intelligence can help to overcome the drawbacks of manual testing. It is practically unsustainable for software development or quality assurance (QA) teams to perform a well-managed web app test with thousands of users by hand. With the help of automated testing, the user can simulate tens, hundreds, or thousands of virtual users who can interact with a network, software, or web-based app.

Massive Support for Developers as Well as Testers

Developers can use automated tests run by the machine to monitor for errors instantly, before code is sent for quality assurance. These tests can run automatically whenever source code changes are checked in, and the team or the app’s builder can be notified if a test result turns out to be unsuccessful. Features like these save developers time and boost their confidence.

Leveraging the Whole Test Scope

In software testing, artificial intelligence helps the user increase the overall coverage and depth of tests, leading to a massive enhancement in software quality. Artificial intelligence-driven software testing can look into memory, file contents, internal program states, and data tables to ascertain whether the software is behaving as it should. On the whole, test automation can perform more than a thousand different test cases in each test run, offering a scope that would never have been possible through manual testing.

Less Time-Consuming and Helps in Quick Marketing

Because software tests must be repeated every time the source code is altered, repetitive manual tests can prove to be time-consuming and tremendously expensive.

On the other hand, once developed, automated tests backed by machine learning can be performed continuously, without the need to incur any extra expense.

The total time taken for software testing can be reduced from two or three days to a few hours, which indirectly helps to save money.

To Wrap Up

Integrating artificial intelligence (AI) with software testing and software development can help to build a world where software can be swiftly examined, diagnosed, and modified.

Artificial intelligence testing will permit high-quality engineering and will decrease the total time taken for testing and development. As a result, it will help to save time, money, and resources, while allowing testers to pay attention to prime activities such as launching quality software.

 

Build A Serverless Function

In this tutorial, you’ll build and publish a serverless function that generates QR codes, using Cloudflare Workers.

Demo

This tutorial makes use of Wrangler, our command-line tool for generating, building, and publishing projects on the Cloudflare Workers platform. If you haven’t used Wrangler, we recommend checking out the “Installing the CLI” part of our Quick Start guide, which will get you set up with Wrangler, and familiar with the basic commands.

If you’re interested in building and publishing serverless functions, this is the guide for you! No prior experience with serverless functions or Cloudflare Workers is assumed.

One more thing before you start the tutorial: if you just want to jump straight to the code, we’ve made the final version of the codebase available on GitHub. You can take that code, customize it, and deploy it for use in your own projects. Happy coding!

Prerequisites

To publish your QR Code Generator function to Cloudflare Workers, you’ll need a few things:

  • A Cloudflare account, and access to the API keys for that account
  • A Wrangler installation running locally on your machine, and access to the command-line

If you don’t have those things quite yet, don’t worry. We’ll walk through each of them and make sure we’re ready to go, before you start creating your application.

You’ll need to get your Cloudflare API keys to deploy code to Cloudflare Workers: see “Finding your Cloudflare API keys” for a brief guide on how to find them.

Generate

Cloudflare’s command-line tool for managing Worker projects, Wrangler, has great support for templates – pre-built collections of code that make it easy to get started writing Workers. We’ll make use of the default JavaScript template to start building your project.

In the command line, generate your Worker project, using Wrangler’s worker-template, and pass the project name “qr-code-generator”:

wrangler generate qr-code-generator
cd qr-code-generator

Wrangler templates are just Git repositories, so if you want to create your own templates, or use one from our Template Gallery, there’s a ton of options to help you get started.

Cloudflare’s worker-template includes support for building and deploying JavaScript-based projects. Inside of your new qr-code-generator directory, index.js represents the entry-point to your Cloudflare Workers application.

All Cloudflare Workers applications start by listening for fetch events, which are fired when a client makes a request to a Workers route. When that request occurs, you can construct responses and return them to the user. This tutorial will walk you through understanding how the request/response pattern works, and how we can use it to build fully-featured applications.

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

/**
 * Fetch and log a request
 * @param {Request} request
 */
async function handleRequest(request) {
  return new Response('Hello worker!', { status: 200 })
}

In your default index.js file, we can see that request/response pattern in action. The handleRequest function constructs a new Response with the body text “Hello worker!”, as well as an explicit status code of 200.

When a fetch event comes into the worker, the script uses event.respondWith to return that new response back to the client. This means that your Cloudflare Worker script will serve new responses directly from Cloudflare’s cloud network: instead of continuing to the origin, where a standard server would accept requests, and return responses, Cloudflare Workers allows you to respond quickly and efficiently by constructing responses directly on the edge.

Build

Any project you publish to Cloudflare Workers can make use of modern JS tooling like ES modules, NPM packages, and async/await functions to put together your application. In addition, simple serverless functions aren’t the only thing you can publish on Cloudflare Workers: you can build full applications using the same tooling and process as what we’ll be building today.

The QR code generator we’ll build in this tutorial will be a serverless function that runs at a single route and receives requests. Given text sent inside of that request (such as URLs, or strings), the function will encode the text into a QR code, and serve the QR code as a PNG response.

Handling requests

Currently, our Workers function receives requests, and returns a simple response with the text “Hello worker!”. To handle data coming into our serverless function, check if the incoming request is a POST:

async function handleRequest(request) {
  if (request.method === 'POST') {
    return new Response('Hello worker!', { status: 200 })
  }
}

Currently, if an incoming request isn’t a POST, response will be undefined. Since we only care about incoming POST requests, populate response with a new Response with a 500 status code if the incoming request isn’t a POST:

async function handleRequest(request) {
  let response
  if (request.method === 'POST') {
    response = new Response('Hello worker!', { status: 200 })
  } else {
    response = new Response('Expected POST', { status: 500 })
  }
  return response
}

With the basic flow of handleRequest established, it’s time to think about how to handle incoming valid requests: if a POST request comes in, the function should generate a QR code. To start, move the “Hello worker!” response into a new function, generate, which will ultimately contain the bulk of our function’s logic:

const generate = async request => {
  return new Response('Hello worker!', { status: 200 })
}

async function handleRequest(request) {
  // ...
  if (request.method === 'POST') {
    response = await generate(request)
  }
  // ...
}

Building a QR Code

All projects deployed to Cloudflare Workers support NPM packages, which makes it incredibly easy to rapidly build out a lot of functionality in your serverless functions. The qr-image package is a great way to take text, and encode it into a QR code, with support for generating the codes in a number of file formats (such as PNG, the default, and SVG), and configuring other aspects of the generated QR code. In the command-line, install and save qr-image to your project’s package.json:

npm install --save qr-image

In index.js, require the qr-image package as the variable qr. In the generate function, parse the incoming request as JSON, using request.json(), and use the text to generate a QR code using qr.imageSync:

const qr = require('qr-image')

const generate = async request => {
  const body = await request.json()
  const text = body.text
  const qr_png = qr.imageSync(text || 'https://workers.dev')
}

By default, the QR code is generated as a PNG. Construct a new instance of Response, passing in the PNG data as the body, and a Content-Type header of image/png: this will allow browsers to properly parse the data coming back from your serverless function, as an image:

const generate = async request => {
  // ...
  const headers = { 'Content-Type': 'image/png' }
  return new Response(qr_png, { headers })
}

With the generate function filled out, we can simply wait for the generation to finish in handleRequest, and return it to the client as response:

async function handleRequest(request) {
  // ...
  if (request.method === 'POST') {
    response = await generate(request)
  }
  // ...
}

Testing In a UI

The serverless function will work if a user sends a POST request to a route, but it would be great to also be able to test it with a proper interface. At the moment, if your function receives any request that isn’t a POST, a 500 response is returned. The new version of handleRequest should return a new Response with a static HTML body instead of the 500 error:

const landing = `
<h1>QR Generator</h1>
<p>Click the below button to generate a new QR code. This will make a request to your serverless function.</p>
<input type="text" id="text" value="https://workers.dev"></input>
<button onclick='generate()'>Generate QR Code</button>
<p>Check the "Network" tab in your browser's developer tools to see the generated QR code.</p>
<script>
  function generate() {
    fetch(window.location.pathname, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ text: document.querySelector("#text").value })
    })
  }
</script>
`

async function handleRequest(request) {
  let response
  if (request.method === 'POST') {
    response = await generate(request)
  } else {
    response = new Response(landing, { headers: { 'Content-Type': 'text/html' } })
  }
  return response
}

The landing variable, which is a static HTML string, sets up an input tag and a corresponding button, which calls the generate function. This function makes an HTTP POST request back to your serverless function, allowing you to see the corresponding QR code image data inside of your browser’s network inspector.

With that, your serverless function is complete! The full version of the code looks like this:

const qr = require('qr-image')

const generate = async request => {
  const { text } = await request.json()
  const headers = { 'Content-Type': 'image/png' }
  const qr_png = qr.imageSync(text || 'https://workers.dev')
  return new Response(qr_png, { headers })
}

const landing = `
<h1>QR Generator</h1>
<p>Click the below button to generate a new QR code. This will make a request to your serverless function.</p>
<input type="text" id="text" value="https://workers.dev"></input>
<button onclick='generate()'>Generate QR Code</button>
<p>Check the "Network" tab in your browser's developer tools to see the generated QR code.</p>
<script>
  function generate() {
    fetch(window.location.pathname, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ text: document.querySelector("#text").value })
    })
  }
</script>
`

async function handleRequest(request) {
  let response
  if (request.method === 'POST') {
    response = await generate(request)
  } else {
    response = new Response(landing, { headers: { 'Content-Type': 'text/html' } })
  }
  return response
}

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

Publish

And with that, you’re finished writing the code for the QR code serverless function, on Cloudflare Workers!

Wrangler has built-in support for bundling, uploading, and releasing your Cloudflare Workers application. To do this, we’ll run wrangler publish, which will build and publish your code:
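Assuming your wrangler.toml already contains your account details from the Quick Start, it’s a single command:

wrangler publish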

Publish

Resources

In this tutorial, you built and published a serverless function to Cloudflare Workers for generating QR codes. If you’d like to see the full source code for this application, you can find it on GitHub.

If you enjoyed this tutorial, we encourage you to explore our other tutorials for building on Cloudflare Workers.

If you want to get started building your own projects, check out the quick-start templates we’ve provided in our Template Gallery.
