Sumedh Meshram

A Personal Blog

What is CI/CD?

Continuous integration (CI) and continuous delivery (CD) are common terms in software production. But do you know what they mean?

What does "continuous" mean?

Continuous is used to describe many different processes that follow the practices I describe here. It doesn't mean "always running." It does mean "always ready to run." In the context of creating software, it also includes several core concepts/best practices. These are:

  • Frequent releases: The goal behind continuous practices is to enable delivery of quality software at frequent intervals. Frequency here is variable and can be defined by the team or company. For some products, once a quarter, month, week, or day may be frequent enough. For others, multiple times a day may be desired and doable. Continuous can also take on an "occasional, as-needed" aspect. The end goal is the same: Deliver software updates of high quality to end users in a repeatable, reliable process. Often this may be done with little to no interaction or even knowledge of the users (think device updates).

  • Automated processes: A key part of enabling this frequency is having automated processes to handle nearly all aspects of software production. This includes building, testing, analysis, versioning, and, in some cases, deployment.

  • Repeatable: If we are using automated processes that always have the same behavior given the same inputs, then processing should be repeatable. That is, if we go back and enter the same version of code as an input, we should get the same set of deliverables. This also assumes we have the same versions of external dependencies (i.e., other deliverables we don't create that our code uses). Ideally, this also means that the processes in our pipelines can be versioned and re-created (see the DevOps discussion later on).

  • Fast processing: "Fast" is a relative term here, but regardless of the frequency of software updates/releases, continuous processes are expected to process changes from source code to deliverables in an efficient manner. Automation takes care of much of this, but automated processes may still be slow. For example, integrated testing across all aspects of a product that takes most of the day may be too slow for product updates that have a new candidate release multiple times per day.

What is a "continuous delivery pipeline"?

 

The different tasks and jobs that handle transforming source code into a releasable product are usually strung together into a software "pipeline" where successful completion of one automatic process kicks off the next process in the sequence. Such pipelines go by many different names, such as continuous delivery pipeline, deployment pipeline, and software development pipeline. An overall supervisor application manages the definition, running, monitoring, and reporting around the different pieces of the pipeline as they are executed.

 

How does a continuous delivery pipeline work?

The actual implementation of a software delivery pipeline can vary widely. There are a large number and variety of applications that may be used in a pipeline for the various aspects of source tracking, building, testing, gathering metrics, managing versions, etc. But the overall workflow is generally the same. A single orchestration/workflow application manages the overall pipeline, and each of the processes runs as a separate job or is stage-managed by that application. Typically, the individual "jobs" are defined in a syntax and structure that the orchestration application understands and can manage as a workflow.

Jobs are created to do one or more functions (building, testing, deploying, etc.). Each job may use a different technology or multiple technologies. The key is that the jobs are automated, efficient, and repeatable. If a job is successful, the workflow manager application triggers the next job in the pipeline. If a job fails, the workflow manager alerts developers, testers, and others so they can correct the problem as quickly as possible. Because of the automation, errors can be found much more quickly than by running a set of manual processes. This quick identification of errors is called "fail fast" and can be just as valuable as getting changes to the pipeline's endpoint.

What is meant by "fail fast"?

One of a pipeline's jobs is to quickly process changes. Another is to monitor the different tasks/jobs that create the release. Since code that doesn't compile or fails a test can hold up the pipeline, it's important for the users to be notified quickly of such situations. Fail fast refers to the idea that the pipeline processing finds problems as soon as possible and quickly notifies users so the problems can be corrected and code resubmitted for another run through the pipeline. Often, the pipeline process can look at the history to determine who made that change and notify the person and their team.

Do all parts of a continuous delivery pipeline have to be automated?

Nearly all parts of the pipeline should be automated. For some parts, it may make sense to have a spot for human intervention/interaction. An example might be for user-acceptance testing (having end users try out the software and make sure it does what they want/expect). Another case might be deployment to production environments where groups want to have more human control. And, of course, human intervention is required if the code isn't correct and breaks.

With that background on the meaning of continuous, let's look at the different types of continuous processing and what each means in the context of a software pipeline.

What is continuous integration?

Continuous integration (CI) is the process of automatically detecting, pulling, building, and (in most cases) doing unit testing as source code is changed for a product. CI is the activity that starts the pipeline (although certain pre-validations—often called "pre-flight checks"—are sometimes incorporated ahead of CI).

The goal of CI is to quickly make sure a new change from a developer is "good" and suitable for further use in the code base.

How does continuous integration work?

The basic idea is having an automated process "watching" one or more source code repositories for changes. When a change is pushed to the repositories, the watching process detects the change, pulls down a copy, builds it, and runs any associated unit tests.

How does continuous integration detect changes?

These days, the watching process is usually an application like Jenkins that also orchestrates all (or most) of the processes running in the pipeline and monitors for changes as one of its functions. The watching application can monitor for changes in several different ways. These include:

  • Polling: The monitoring program repeatedly asks the source management system, "Do you have anything new in the repositories I'm interested in?" When the source management system has new changes, the monitoring program "wakes up" and does its work to pull the new code and build/test it.

  • Periodic: The monitoring program is configured to periodically kick off a build regardless of whether there are changes or not. Ideally, if there are no changes, then nothing new is built, so this doesn't add much additional cost.

  • Push: This is the inverse of the monitoring application checking with the source management system. In this case, the source management system is configured to "push out" a notification to the monitoring application when a change is committed into a repository. Most commonly, this can be done in the form of a "webhook"—a program that is "hooked" to run when new code is pushed and sends a notification over the internet to the monitoring program. For this to work, the monitoring program must have an open port that can receive the webhook information over the internet.
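
For illustration, here is a minimal C# sketch of what such a push receiver could look like. The port, path, and messages are invented for this example; a real CI server such as Jenkins provides this endpoint for you.

using System;
using System.Net;

class WebhookListener
{
    static void Main()
    {
        // Listen on an open port for webhook notifications (hypothetical URL).
        var listener = new HttpListener();
        listener.Prefixes.Add("http://+:8080/webhook/");
        listener.Start();
        Console.WriteLine("Waiting for push notifications...");

        while (true)
        {
            HttpListenerContext context = listener.GetContext(); // blocks until a request arrives
            if (context.Request.HttpMethod == "POST")
            {
                // A push notification arrived: this is where the CI process
                // would pull the new code and kick off the build and unit tests.
                Console.WriteLine("Change detected; triggering build/test job.");
            }
            context.Response.StatusCode = 200;
            context.Response.Close();
        }
    }
}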

What are "pre-checks" (aka pre-flight checks)?

Additional validations may be done before code is introduced into the source repository and triggers continuous integration. These follow best practices such as test builds and code reviews. They are usually built into the development process before the code is introduced in the pipeline. But some pipelines may also include them as part of their monitored processes or workflows.

As an example, a tool called Gerrit allows for formal code reviews, validations, and test builds after a developer has pushed code but before it is allowed into the (Git remote) repository. Gerrit sits between the developer's workspace and the Git remote repository. It "catches" pushes from the developer and can do pass/fail validations to ensure they pass before being allowed to make it into the repository. This can include detecting the proposed change and kicking off a test build (a form of CI). It also allows for groups to do formal code reviews at that point. In this way, there is an extra measure of confidence that the change will not break anything when it is merged into the codebase.

What are "unit tests"?

Unit tests (also known as "commit tests") are small, focused tests written by developers to ensure new code works in isolation. "In isolation" here means not depending on or making calls to other code that isn't directly accessible nor depending on external data sources or other modules. If such a dependency is required for the code to run, those resources can be represented by mocks. Mocks refer to using a code stub that looks like the resource and can return values but doesn't implement any functionality.

In most organizations, developers are responsible for creating unit tests to prove their code works. In fact, one model (known as test-driven development [TDD]) requires unit tests to be designed first as a basis for clearly identifying what the code should do. Because such code changes can be fast and numerous, they must also be fast to execute.

As they relate to the continuous integration workflow, a developer creates or updates the source in their local working environment and uses the unit tests to ensure the newly developed function or method works. Typically, these tests take the form of asserting that a given set of inputs to a function or method produces a given set of outputs. They generally test to ensure that error conditions are properly flagged and handled. Various unit-testing frameworks, such as JUnit for Java development, are available to assist.
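
To make the shape of such a test concrete, here is a small, hypothetical C# example in the NUnit style. The calculator class, the tax-rate interface, and the hand-written mock are all invented for illustration; they are not from any particular codebase.

using NUnit.Framework;

// Code under test: a calculator that depends on an external tax-rate source.
public interface ITaxRateProvider
{
    decimal GetRate(string region);
}

public class PriceCalculator
{
    private readonly ITaxRateProvider rates;

    public PriceCalculator(ITaxRateProvider rates) { this.rates = rates; }

    public decimal Total(decimal net, string region)
    {
        return net * (1 + rates.GetRate(region));
    }
}

// A hand-written mock: it looks like the real provider and returns a canned
// value but implements no real functionality.
class FixedRateProvider : ITaxRateProvider
{
    public decimal GetRate(string region) { return 0.10m; }
}

[TestFixture]
public class PriceCalculatorTests
{
    [Test]
    public void Total_AppliesTaxRate_ForGivenInputs()
    {
        var calculator = new PriceCalculator(new FixedRateProvider());

        // Assert that a given set of inputs produces the expected output.
        Assert.AreEqual(110m, calculator.Total(100m, "anywhere"));
    }
}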

What is continuous testing?

Continuous testing refers to the practice of running automated tests of broadening scope as code goes through the CD pipeline. Unit testing is typically integrated with the build processes as part of the CI stage and focused on testing code in isolation from other code interacting with it.

Beyond that, there are various forms of testing that can/should occur. These can include:

  • Integration testing validates that groups of components and services all work together.

  • Functional testing validates that the results of executing functions in the product are as expected.

  • Acceptance testing measures some characteristic of the system against acceptable criteria. Examples include performance, scalability, stress, and capacity.

All of these may not be present in the automated pipeline, and the lines between some of the different types can be blurred. But the goal of continuous testing in a delivery pipeline is always the same: to prove by successive levels of testing that the code is of a quality that it can be used in the release that's in progress. Building on the continuous principle of being fast, a secondary goal is to find problems quickly and alert the development team. This is usually referred to as fail fast.

Besides testing, what other kinds of validations can be done against code in the pipeline?

In addition to the pass/fail aspects of tests, applications exist that can also tell us the number of source code lines that are exercised (covered) by our test cases. This is an example of a metric that can be computed across the source code. This metric is called code coverage and can be measured by tools such as JaCoCo for Java source.

Many other types of metrics exist, such as counting lines of code, measuring complexity, and comparing coding structures against known patterns. Tools such as SonarQube can examine source code and compute these metrics. Beyond that, users can set thresholds for what kind of ranges they are willing to accept as "passing" for these metrics. Then, processing in the pipeline can be set to check the computed values against the thresholds, and if the values aren't in the acceptable range, processing can be stopped. Applications such as SonarQube are highly configurable and can be tuned to check only for the things that a team is interested in.
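
As a toy sketch of such a threshold ("quality gate") check, assuming the coverage percentage has already been computed by a tool like JaCoCo:

using System;

class QualityGate
{
    // Stop pipeline processing when a computed metric (here, code coverage)
    // falls outside the range the team is willing to accept.
    public static void CheckCoverage(double coveragePercent, double thresholdPercent)
    {
        if (coveragePercent < thresholdPercent)
        {
            throw new Exception(string.Format(
                "Quality gate failed: coverage {0}% is below the {1}% threshold.",
                coveragePercent, thresholdPercent));
        }

        Console.WriteLine("Quality gate passed.");
    }
}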

What is continuous delivery?

Continuous delivery (CD) generally refers to the overall chain of processes (pipeline) that automatically gets source code changes and runs them through build, test, packaging, and related operations to produce a deployable release, largely without any human intervention.

The goals of CD in producing software releases are automation, efficiency, reliability, reproducibility, and verification of quality (through continuous testing).

CD incorporates CI (automatically detecting source code changes, executing build processes for the changes, and running unit tests to validate), continuous testing (running various kinds of tests on the code to gain successive levels of confidence in the quality of the code), and (optionally) continuous deployment (making releases from the pipeline automatically available to users).

How are multiple versions identified/tracked in pipelines?

Versioning is a key concept in working with CD and pipelines. Continuous implies the ability to frequently integrate new code and make updated releases available. But that doesn't imply that everyone always wants the "latest and greatest." This may be especially true for internal teams that want to develop or test against a known, stable release. So, it is important that the pipeline versions objects that it creates and can easily store and access those versioned objects.

The objects created in the pipeline processing from the source code can generally be called artifacts. Artifacts should have versions applied to them when they are built. The recommended strategy for assigning version numbers to artifacts is called semantic versioning. (This also applies to versions of dependent artifacts that are brought in from external sources.)

Semantic version numbers have three parts: major, minor, and patch. (For example, 1.4.3 reflects major version 1, minor version 4, and patch version 3.) The idea is that a change in one of these parts represents a level of update in the artifact. The major version is incremented only for incompatible API changes. The minor version is incremented when functionality is added in a backward-compatible manner. And the patch version is incremented when backward-compatible bug fixes are made. These are recommended guidelines, but teams are free to vary from this approach, as long as they do so in a consistent and well-understood manner across the organization. For example, a number that increases each time a build is done for a release may be put in the patch field.
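
As a sketch, comparing two semantic versions means comparing the three parts in order of significance. The SemVer class below is hypothetical, written for illustration only (it is not a standard library type):

using System;

public class SemVer : IComparable<SemVer>
{
    public int Major, Minor, Patch;

    public SemVer(int major, int minor, int patch)
    {
        Major = major;
        Minor = minor;
        Patch = patch;
    }

    public static SemVer Parse(string version)
    {
        string[] parts = version.Split('.');
        return new SemVer(int.Parse(parts[0]), int.Parse(parts[1]), int.Parse(parts[2]));
    }

    // Compare part by part, most significant first.
    public int CompareTo(SemVer other)
    {
        if (Major != other.Major) return Major.CompareTo(other.Major);
        if (Minor != other.Minor) return Minor.CompareTo(other.Minor);
        return Patch.CompareTo(other.Patch);
    }
}

Note that SemVer.Parse("1.4.10").CompareTo(SemVer.Parse("1.4.3")) is positive, as it should be; a plain string comparison would wrongly rank "1.4.10" below "1.4.3".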

How are artifacts "promoted"?

Teams can assign a promotion "level" to artifacts to indicate suitability for testing, production, etc. There are various approaches. Applications such as Jenkins or Artifactory can be enabled to do promotion. Or a simple scheme can be to add a label to the end of the version string. For example, -snapshot can indicate the latest version (snapshot) of the code was used to build the artifact. Various promotion strategies or tools can be used to "promote" the artifact to other levels such as -milestone or -production as an indication of the artifact's stability and readiness for release.
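
A toy sketch of label-based promotion, where a helper simply appends a stage suffix to the version string (the method is hypothetical):

static class Promotion
{
    // Append a promotion label such as "snapshot" or "milestone" to a version.
    public static string Promote(string version, string stage)
    {
        return version + "-" + stage;
    }
}

// Promotion.Promote("1.4.3", "snapshot")  -> "1.4.3-snapshot"
// Promotion.Promote("1.4.3", "milestone") -> "1.4.3-milestone"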

How are multiple versions of artifacts stored and accessed?

Versioned artifacts built from source can be stored via applications that manage "artifact repositories." Artifact repositories are like source management for built artifacts. The application (such as Artifactory or Nexus) can accept versioned artifacts, store and track them, and provide ways for them to be retrieved.

Pipeline users can specify the versions they want to use and have the pipeline pull in those versions.

What is continuous deployment?

Continuous deployment (CD) refers to the idea of being able to automatically take a release of code that has come out of the CD pipeline and make it available for end users. Depending on the way the code is "installed" by users, that may mean automatically deploying something in a cloud, making an update available (such as for an app on a phone), updating a website, or simply updating the list of available releases.

An important point here is that just because continuous deployment can be done doesn't mean that every set of deliverables coming out of a pipeline is always deployed. It does mean that, via the pipeline, every set of deliverables is proven to be "deployable." This is accomplished in large part by the successive levels of continuous testing (see the section on continuous testing in this article).

Whether or not a release from a pipeline run is deployed may be gated by human decisions and various methods employed to "try out" a release before fully deploying it.

What are some ways to test out deployments before fully deploying to all users?

Since having to rollback/undo a deployment to all users can be a costly situation (both technically and in the users' perception), numerous techniques have been developed to allow "trying out" deployments of new functionality and easily "undoing" them if issues are found. These include:

Blue/green testing/deployments

In this approach to deploying software, two identical hosting environments are maintained — a blue one and a green one. (The colors are not significant and serve only as identifiers.) At any given point, one of these is the production deployment and the other is the candidate deployment.

In front of these instances is a router or other system that serves as the customer “gateway” to the product or application. By pointing the router to the desired blue or green instance, customer traffic can be directed to the desired deployment. In this way, swapping out which deployment instance is pointed to (blue or green) is quick, easy, and transparent to the user.

When a new release is ready for testing, it can be deployed to the non-production environment. After it’s been tested and approved, the router can be changed to point the incoming production traffic to it (so it becomes the new production site). Now the hosting environment that was production is available for the next candidate.

Likewise, if a problem is found with the latest deployment and the previous production instance is still deployed in the other environment, a simple change can point the customer traffic back to the previous production instance — effectively taking the instance with the problem “offline” and rolling back to the previous version. The new deployment with the problem can then be fixed in the other area.
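
As a toy illustration (not a real router), the swap can be thought of as exchanging two target addresses; the URLs below are invented:

class BlueGreenRouter
{
    private string live = "https://blue.example.com";  // current production
    private string idle = "https://green.example.com"; // candidate deployment

    public string LiveTarget { get { return live; } }

    public void Swap()
    {
        // Promote the candidate and demote the old production in one step.
        // Rolling back is just calling Swap() again.
        string temp = live;
        live = idle;
        idle = temp;
    }
}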

Canary testing/deployment

In some cases, swapping out the entire deployment via a blue/green environment may not be workable or desired. Another approach is known as canary testing/deployment. In this model, a portion of customer traffic is rerouted to new pieces of the product. For example, a new version of a search service in a product may be deployed alongside the current production version of the service. Then, 10% of search queries may be routed to the new version to test it out in a production environment.

If the new service handles the limited traffic with no problems, then more traffic may be routed to it over time. If no problems arise, then over time, the amount of traffic routed to the new service can be increased until 100% of the traffic is going to it. This effectively “retires” the previous version of the service and puts the new version into effect for all customers.
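
A minimal sketch of the routing decision, using the 10% figure from the example above (the service names are invented):

using System;

class CanaryRouter
{
    private static readonly Random rng = new Random();

    public static string Route()
    {
        // Roughly 10% of requests go to the new version of the search service;
        // raising the threshold over time shifts more traffic to the canary.
        return rng.Next(100) < 10 ? "search-v2" : "search-v1";
    }
}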

Feature toggles

For new functionality that may need to be easily backed out (in case a problem is found), developers can add a feature toggle. This is a software if-then switch that activates the new code only if a data value is set. The data value lives in a globally accessible place where the deployed application checks whether it should execute the new code. If the data value is set, it executes the code; if not, it doesn't.

This gives developers a remote "kill switch" to turn off the new functionality if a problem is found after deployment to production.
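
A minimal C# sketch of a toggle, where an environment variable stands in for the "globally accessible place" (the variable and service names are invented):

using System;

class SearchService
{
    public string Search(string query)
    {
        bool newSearchEnabled =
            Environment.GetEnvironmentVariable("ENABLE_NEW_SEARCH") == "1";

        if (newSearchEnabled)
        {
            return NewSearch(query); // new code path, guarded by the toggle
        }
        return OldSearch(query);     // clearing the value is the "kill switch"
    }

    private string NewSearch(string query) { return "new:" + query; }
    private string OldSearch(string query) { return "old:" + query; }
}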

Dark launch

In this practice, code is incrementally tested/deployed into production, but changes are not made visible to users (thus the "dark" name). For example, in the production release, some portion of web queries might be redirected to a service that queries a new data source. This information can be collected by development for analysis—without exposing any information about the interface, transaction, or results back to users.

The idea here is to get real information on how a candidate change would perform under a production load without impacting users or changing their experience. Over time, more load can be redirected until either a problem is found or the new functionality is deemed ready for all to use. Feature toggles can be used to handle the mechanics of dark launches.

What is DevOps?

DevOps is a set of ideas and recommended practices around how to make it easier for development and operational teams to work together on developing and releasing software. Historically, development teams created products but did not install/deploy them in a regular, repeatable way, as customers would do. That set of install/deploy tasks (as well as other support tasks) were left to the operations teams to sort out late in the cycle. This often resulted in a lot of confusion and problems, since the operations team was brought into the loop late in the cycle and had to make what they were given work in a short timeframe. As well, development teams were often left in a bad position—because they had not sufficiently tested the product's install/deploy functionality, they could be surprised by problems that emerged during that process.

This often led to a serious disconnect and lack of cooperation between development and operations teams. The DevOps ideals advocate ways of doing things that involve both development and operations staff from the start of the cycle through the end, such as CD.

How does CD intersect with DevOps?

The CD pipeline is an implementation of several DevOps ideals. The later stages of a product, such as packaging and deployment, can always be done on each run of the pipeline rather than waiting for a specific point in the product development cycle. As well, both development and operations staff can clearly see when things work and when they don't, from development to deployment. For a cycle of a CD pipeline to be successful, it must pass through not only the processes associated with development but also the ones associated with operations.

Carried to the next level, DevOps suggests that even the infrastructure that implements the pipeline be treated like code. That is, it should be automatically provisioned, trackable, easy to change, and spawn a new run of the pipeline if it changes. This can be done by implementing the pipeline as code.

What is "pipeline-as-code"?

Pipeline-as-code is a general term for creating pipeline jobs/tasks via programming code, just as developers work with source code for products. The goal is to have the pipeline implementation expressed as code so it can be stored with the code, reviewed, tracked over time, and easily spun up again if there is a problem and the pipeline must be stopped. Several tools allow this, including Jenkins 2.

How does DevOps impact infrastructure for producing software?

Traditionally, individual hardware systems used in pipelines were configured with software (operating systems, applications, development tools, etc.) one at a time. At the extreme, each system was a custom, hand-crafted setup. This meant that when a system had problems or needed to be updated, that was frequently a custom task as well. This kind of approach goes against the fundamental CD ideal of having an easily reproducible and trackable environment.

Over the years, applications have been developed to standardize provisioning (installing and configuring) systems. As well, virtual machines were developed as programs that emulate computers running on top of other computers. These VMs require a supervisory program to run them on the underlying host system. And they require their own operating system copy to run.

Next came containers. Containers, while similar in concept to VMs, work differently. Instead of requiring a separate program and a copy of an OS to run, they simply use some existing OS constructs to carve out isolated space in the operating system. Thus, they behave similarly to a VM to provide the isolation but don't require the overhead.

Because VMs and containers are created from stored definitions, they can be destroyed and re-created easily with no impact on the host systems where they are running. This gives us a re-creatable system on which to run pipelines. Also, for containers, we can track changes to the definition file they are built from—just as we would for source code.

Thus, if we run into a problem in a VM or container, it may be easier and quicker to just destroy and re-create it instead of trying to debug and make a fix to the existing one.

This also implies that any change to the code for the pipeline can trigger a new run of the pipeline (via CI) just as a change to code would. This is one of the core ideals of DevOps regarding infrastructure.

How Do Gantt Charts Make Project Managers’ Lives Easier?

Do you want a way to see how tasks are progressing? Want to see what roles exist in a project and how others depend on them? If you’ve always wanted a quick view of how far behind or ahead of schedule your project is, then it’s time for Gantt charts.

You have probably already heard about Gantt charts, considering their popularity in the project management domain. As a new project manager or team leader, it’s perfectly fair to have some apprehensions about them.

Fret not: in this post, we are going to clear up your doubts regarding Gantt charts, the benefits they offer, and their purpose in project management. Before that, let’s learn a little bit about their history.

Historical Background

People often think that Henry Gantt was the man behind Gantt charts, but in reality it was Karol Adamiecki, a Polish engineer, who devised these charts for better planning in 1896.

Adamiecki published his work only in Polish and Russian, and the first-ever chart of this kind was called the Harmonogram. Years later, Gantt refined these charts and popularized them, hence the name "Gantt charts."

Why Use Gantt Charts in Project Management

The best thing about Gantt charts is that they equip you with the right tools to plan, manage, and schedule projects. Gantt chart software also helps you automate processes, create dependencies, add milestones, and identify critical paths.

A Visual Timeline of Tasks

Gantt charts provide a visual timeline of the project so that you can schedule tasks and plan and iterate your projects quickly and efficiently. You get an overview of milestones and other important information, giving a clear picture of who’s working on what and the deadlines attached to each task. Such information plays a key role in effective project planning and tracking, bringing together everything you need to meet deadlines and deliver projects successfully.

Keeps Everyone on The Same Page

With Gantt charts, you get a unified view of all your projects in one central place, making it easy to handle team planning and scheduling. The visual nature of these charts also makes it easier for people working together to agree on shared commitments and work in unison toward the desired goal. It reduces the chances of misunderstanding among team members on difficult tasks, as everyone is already on the same page.

A Better Understanding of Task Relationships

Often, a task depends on, or is related to, other tasks. These charts help you understand how various tasks are interrelated. They also help you set dependencies between different tasks to reflect how a change in their scheduling will impact the overall progress of the project. With a better understanding of task relationships, you can keep the workflow smooth and productivity high.

Allocate Resources Effectively

Gantt chart software helps you delegate work items to different people and allocate resources without overloading anyone. By following the chart, you can adjust or share resources if someone on the team needs help. If resources know what to do and when, and are managed properly, there is a better chance of completing the project on time and within the desired budget, too.

Seamless Communication

Nobody working on a project has to run to another team member to ask a question; with Gantt chart software, you can communicate easily and seamlessly. Once a plan is devised, approved, and started, you can forget about remembering who’s working on what, as the visual nature of Gantt charts tells you everything you need to know in one place. That’s how Gantt charts have made things easier and stress-free for project managers, so they can focus on getting things done.

Track the Project Progress

Whether your project is small or complex, one of the crucial things for a project manager is to see how a project is progressing and whether things are on track or not. Gantt charts show the completion percentage of every task handled by team members, which gives an estimate of the time needed to get tasks done. Gantt charts are indeed one of the safest bets for predicting project progress and seeing whether you need to change your strategy.

More Accountability

Every Gantt chart software comes with easy drag-and-drop for efficient scheduling. From setting start and end dates to rescheduling them or adjusting dependencies, everything works well with Gantt charts. Team members get a sense of accountability while moving tasks, and the task completion bar constantly reminds them to deliver the project before the deadline.

More Clarity, Less Confusion

Gantt charts are simple and straightforward. Apart from being intuitive, they highlight the critical path, which helps you identify the tasks that directly impact the overall progress of a project. This clarity helps team members know what’s working and what’s not, so they can change their strategy to achieve their goals. It lessens confusion and brings more clarity to the process.

Complete Projects on Time

As Gantt charts provide a unified view of tasks, projects, and resources, they help you focus your precious time, effort, and brainpower on things that actually matter. When team members can visualize their efforts in a project and see how the progress of the entire project depends on them, it gives them real motivation.

Stay Ahead Always

Not only can you stay on top of things with Gantt charts, but they also help project managers stay ahead of schedule if followed precisely. Project managers can analyze team performance and figure out patterns that must be readjusted for better output.

Conclusion

By now you might have understood the importance of Gantt charts in a project manager’s life. However, if your work revolves around complex projects, you might want to go for task management software that offers more than a Gantt chart. There are many project management solutions with elaborate features to choose from. Get a free trial, and make the best choice.

Reference : https://dzone.com/articles/how-do-gantt-charts-make-project-managers-life-eas

What is the Most Popular Blockchain in the World?

Blockchain technology is on the rise, and so are its applications; Bitcoin and cryptocurrency have made blockchain a household name. The blockchain is not just an application. It is a technology that promises to bring trust, transparency, and accountability to digital transactions. Blockchain technology can be applied to almost any industry that involves digital transactions.

Most Popular Blockchain

In this article, I will review some of the most popular blockchains in the world.

If you’re new to Blockchain, I recommend starting with What Is Blockchain Technology.

Blockchain starts with Bitcoin. Bitcoin is one of the most searched keywords on Google. The following chart shows the popularity of blockchains.

 

The following table lists the top 15 most popular blockchains in the world. The report is based on the past 90 days of activity.

| Rank | Blockchain | Trends (last 90 days) | Global volume | Traffic rank | Reddit | Twitter | Overall score |
| ---- | ---------- | --------------------- | ------------- | ------------ | ------ | ------- | ------------- |
| #1   | Bitcoin    | 45                    | 11M           | 14,497       | 1.0m   |         | 1.00          |
| #3   | Ethereum   | 5                     | 2.0M          | 26,614       | 423k   | 438k    | 0.26          |
| #4   | EOS        | 11                    | 469K          | 276,619      | 61.9k  | 192k    | 0.23          |
| #5   | NEO        | 4                     | 410K          | 128,762      | 97.8k  | 316k    | 0.22          |
| #6   | TRON       | 6                     | 545K          | 90,677       | 68.8k  | 366k    | 0.20          |
| #7   | Litecoin   | 3                     | 1M            | 233,038      | 199k   | 437k    | 0.20          |
| #8   | Stellar    | 3                     | 278K          | 58,476       | 98.7k  | 260k    | 0.20          |
| #9   | Waves      | 3                     | *             | 38,623       | 56.6k  | 135k    | 0.19          |
| #10  | Monero     | <1                    | 361K          | 84,112       | 151k   | 313k    | 0.17          |
| #11  | Dash       | <1                    | *             | 84,217       | 23.2k  | 320k    | 0.12          |
| #12  | Cardano    | <1                    | 291K          | 100,820      | 70.5k  | 148k    | 0.12          |
| #13  | Verge      | <1                    | *             | 294,930      | 53.7k  | 305k    | 0.10          |
| #14  | NEM        | <1                    | 236K          | 149,135      | 18.5k  | 215k    | 0.10          |
| #15  | Tezos      | <1                    | 82K           | 208,139      | 10.8k  | 39k     | 0.06          |

Please note, this report is based on an algorithm and data collected from various sources on the Internet. The rankings may change over time.

The Score of a blockchain is calculated based on the following factors.

 

  1. Keyword searches in Google
  2. Social media followers on various platforms
  3. Community size on platforms such as Twitter, Telegram, Discord
  4. Articles and content written about the blockchain
  5. Market adoption and valuation
  6. CMC ranking
  7. Buzzwords and talk on the Web
  8. Meetups, user group events, hackathons, and conference participation

 

#1. Bitcoin 

 

Bitcoin King of Blockchain 

Bitcoin is the king of the blockchain and the mother of all cryptocurrencies; it is the reason we’re talking about blockchain today. Bitcoin was created by Satoshi Nakamoto and was released on Jan 9, 2009. Bitcoin is written in the C++ programming language, and the Bitcoin project is an open source software project available to download from GitHub. Several cryptocurrencies have been created using the Bitcoin project and protocol. Bitcoin has a limited supply of 21 million coins.

Bitcoin is also a cryptocurrency, also known as digital currency, that is used for digital payments. Bitcoin’s market symbol is BTC. As of now, Bitcoin’s market cap is $64 billion. At one point in Jan 2018, Bitcoin’s market cap reached close to $330 billion when 1 BTC was close to US $21,000. Currently, 1 BTC trades around $3,600 according to CMC.

Bitcoin blockchain also has several forks. Some of the most popular Bitcoin forks are Bitcoin Cash, Bitcoin SV, Bitcoin Gold, and Bitcoin Diamond.

Bitcoin is an open source project, available on GitHub for the public to download and get involved with. Any developer can contribute to the Bitcoin project, and thousands of developers have downloaded it and created their own versions of cryptocurrencies from the project.

Bitcoin was one of the most searched words on Google in 2018. Bitcoin’s global volume is 11 million searches per month, with a keyword difficulty of 96. The United States is the most popular country for Bitcoin, followed by Germany, India, the UK, and Brazil.

Bitcoin Global Volume 

 

Google Trends shows a significant drop in blockchain product searches from Jan 2018 to Jan 2019. The following graph charts Bitcoin, Ripple, Ethereum, EOS, and NEO over that period; as you can see, the popularity of these keywords dropped by almost 95% within a year.

Blockchain Google Trends

If you want to learn more about Bitcoin, check out What Is Bitcoin In Simplified Terms.
 

#2. Ethereum 

Ethereum Blockchain 

 

Ethereum was created by Vitalik Buterin, Gavin Wood, and Joseph Lubin and was released to the public in 2015. Ethereum is written in Go, C++, and Rust.

Ethereum calls itself the “BLOCKCHAIN APP PLATFORM”. Ethereum is a decentralized software platform designed to create and execute digital smart contracts. Ethereum uses a new programming language called Solidity to write smart contracts. Ethereum blockchain is executed on the Ethereum Virtual Machine (EVM).

Ethereum has a cryptocurrency called Ether. Ether is the underlying token that fuels the Ethereum blockchain network. Ether’s public symbol is ETH. As of now, the market cap of Ethereum is $13 billion, and 1 ETH trades around $126, according to CMC.

#3. EOSIO 

EOSIO Blockchain 

 

EOS.IO, authored by Daniel Larimer and Brendan Blumer, was developed by a private company, block.one. EOS was released to the public in 2018.

EOSIO calls itself “The most powerful infrastructure for decentralized applications”. EOS is an open source blockchain protocol that simulates an operating system and computer and allows developers to build decentralized software applications. EOS.IO is written in C++.

EOSIO is open source, licensed under the MIT software license. The software provides accounts, authentication, databases, asynchronous communication, and the scheduling of applications across multiple CPU cores and/or clusters. The resulting technology is a blockchain architecture that has the potential to scale to millions of transactions per second, eliminates user fees, and allows for quick and easy deployment of decentralized applications.

#4. NEO 

NEO Blockchain 

 

NEO was authored by Da Hongfei and Erik Zhang and was released to the public in 2014. NEO is a blockchain platform and a cryptocurrency. NEO blockchain is designed to build decentralized apps.

NEO’s tagline is “An Open Network For Smart Economy”. NEO is an open source blockchain project available to download on GitHub. NEO is written in C#. NEO supports major programming languages including C#, JavaScript, Python, Java, and Go.

The NEO blockchain uses NEO tokens on the network, which generate GAS tokens. GAS tokens are used to pay for transactions on the network.

#5. TRON 

TRON Blockchain 

 

Raybo was founded in Beijing in 2014 and became China’s first blockchain company. The TRON Foundation was established in Singapore in 2017, and in Dec 2017 TRON launched its open source protocol. Justin Sun is the founder and CEO of TRON. TRON launched its MainNet on May 31, 2018.

TRON wants to “DECENTRALIZE THE WEB” and brands itself as one of the largest blockchain-based operating systems in the world.

 

Key features of TRON are high throughput, high scalability, and high availability. TRON prides itself on a high throughput of 2,000 transactions per second (TPS), compared to Ethereum at 35 TPS and Bitcoin at 6 TPS.

TRON TPS 

 

Summary 

This article lists the top 15 blockchains in the world based on their popularity. Bitcoin is the most popular blockchain in the world.

If you’re new to the blockchain, start with “What is Blockchain” https://www.c-sharpcorner.com/article/what-is-blockchain/ and then read “Do I Need a Blockchain.”  

Further Blockchain Readings 


What Is Blockchain

Do You Need a Blockchain

Top 5 Blockchain Programming Languages 

References 

 

  • Wikipedia
  • Respective blockchain products websites and their documentation
  • Various traffic analytics and reporting tools
  • Social media websites
  • Community websites and discussion groups

5 Trends In Fintech You Will See In 2019


This year, the word “fintech” was mentioned in a Union Budget speech for the first time ever. Once an ambiguous 20th-century portmanteau, fintech has now pervaded our daily lives, impacting everyday money decisions. Fintech is the way forward for the financial empowerment of hundreds of millions of Indians.

Here’s how I feel 2019 will progress for the industry.

Consumer Traction Will Continue To Grow

More and more Indians will continue to turn to the internet to solve their money management problems. For millennials born in the age of the internet, internet-connected smartphones will be the gateway to the financial services industry.

Not just that, the number of internet users in India will continue to grow at a rapid pace: 500 Mn in 2018 as per IAMAI projections, and 700 Mn by 2020, as per other projections. Fintech will continue to churn out solutions for the internet-connected Indian.

Short-Term Lending To Gain Pace

Payday loans – short-term, unsecured loans – have been around for a long time in the West. But they’ve only recently started becoming popular in India. You’ll see not just a proliferation of lending startups but also mainstream banks developing short-term lending products.

 

Paperless Is Accelerating

The only way forward for fintech is paperless. A consumer should be able to buy her financial service from her smartphone, paperlessly and presence-lessly, without having to submit a sheet of paper or meet a bank salesperson.

The Aadhaar verdict this year has shaped how eKYC for new account openings is done. New techniques of eKYC have also evolved, and we’re expecting to see some of them in action soon. For example, you may be able to complete your verification through video KYC.

Work is also going on towards making offline Aadhaar a possibility, wherein a user would be able to control the Aadhaar information she wishes to share with a service provider via XML. Offline Aadhaar will allow authentication without biometrics or the sharing of the Aadhaar number.

PMLA Amendments To Enable Paperless Banking

The Modi government has made amendments to the Telegraph Act as well as the Prevention of Money Laundering Act, following the Supreme Court’s Aadhaar verdict. This will pave the way for the voluntary use of Aadhaar for new phone connections and bank accounts.

Therefore, not only will customers be able to instantly open accounts, but there are now also steeper penalties on entities that misuse Aadhaar data or businesses that withhold services from customers who do not share Aadhaar.

India is rapidly moving to paperless, presence-less delivery of financial products. With more first-time internet users entering the market, expect more developments and innovation in the customer onboarding space.

An employee walked into my room today and said, "I want to quit."


I looked at him and asked for a reason.

He said, "The culture in the office is too demotivating. People are gossiping and have no interest in their work. I think I am losing my skill set."

I smiled and said, "Fine! But before you put in your papers, carry a glass full of water in your hand and take three rounds of the office, making sure you don't spill even a single drop of water on the ground."

He was confused but agreed.

He came back after some time and kept the glass on the table. 

I asked him, "While you were taking your rounds, did you see anyone gossiping? Did anyone disturb you?"

He said, "No. Rather, I didn't notice anything, because I wanted to make sure the water didn't fall on the ground."

So that is the point: if you are focused on the job at hand, external things won't disturb you at all.

So stay focused and keep working.

Success will come your way.

Thank you All

Thank you all: the team of doctors @Dr. Ketan Chaturvedi, @Dr. Suhas Salpekar, @Dr. Tushar Pande, and the @Wockhardt hospital team, who stayed available 24x7 whenever I was in tremendous pain and showed patience while treating an irascible person like me. A special thanks to @Dr. Sushant Admane, my chaddi buddy, my brother from another mother, who coordinated with teams of doctors around the world so I could get the best possible treatment for such a unique health issue. Lastly, thanks to my family and friends who supported me, as always, in coming out of this situation. Thanks again to all... :) Stay healthy!

TechGig Code Contest: The Great Indian Programming League 2013 - May Edition - Visit a Colony

Problem Statement:

In a colony, all the houses stand in a single line. A house is either in good condition or in bad condition. If a house is good, its score is 1; otherwise, it is 0. We decided to perform the scoring of houses in a different way.

We consider not only that particular house but also its two neighboring houses.

New scoring strategy: the score of a house is affected by three houses, the house itself and its two neighbors.

Score 1: if any one of the three houses is in good condition
Score 2: if any two of the three houses are in good condition
Score 3: if all three houses are in good condition

Now Joseph has the score list of all the houses (according to the new strategy), and he wants to know the condition (good or bad) of his house just by looking at the scores. He assumes that the first house is in good condition.

My Result:

Successfully compiled.
 
Solution:

using System;

public class CandidateCode
{
    public static int house_condition(int[] input1, int input2)
    {
        // condition[i] holds the deduced condition of house i (1 = good, 0 = bad).
        int[] condition = new int[input1.Length];

        for (int i = 0; i < input1.Length; i++)
        {
            if (i == 0)
            {
                condition[i] = 1; // the first house is assumed to be in good condition

                // The first house's score covers only itself and its right neighbor.
                if (i + 1 < input1.Length)
                {
                    condition[i + 1] = input1[i] - condition[i];
                }
            }
            else if (i + 1 < input1.Length)
            {
                // An inner house's score is the sum of the conditions of the house
                // and its two neighbors, so the right neighbor's condition follows.
                condition[i + 1] = input1[i] - (condition[i - 1] + condition[i]);
            }
        }

        return condition[input2 - 1]; // input2 is the 1-based house number
    }
}
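
As a quick sanity check of the solution: if the house conditions are 1, 0, 1, 1, the new strategy produces the score list {1, 2, 2, 2} (each score counts the good houses among the house and its neighbors). Asking about house 3 should then return 1:

using System;

class Program
{
    static void Main()
    {
        int[] scores = { 1, 2, 2, 2 }; // scores derived from conditions 1, 0, 1, 1
        Console.WriteLine(CandidateCode.house_condition(scores, 3)); // prints 1 (good)
    }
}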

The Partition Problem - Algorithm [Solved]

Partition Problem

Partition problem is the task of deciding whether a given multiset of positive integers can be partitioned into two subsets S1 and S2 such that the sum of the elements in S1 equals the sum of the elements in S2.

using System;
using System.Linq;

public class CandidateCode
{
    public static string partition(int[] input1)
    {
        bool[] best_assignment = PartitionValues(input1);

        string result1 = "", result2 = "";
        int total1 = 0, total2 = 0;
        for (int i = 0; i < best_assignment.Length; i++)
        {
            if (best_assignment[i])
            {
                result1 += "\r\n " + input1[i];
                total1 += input1[i];
            }
            else
            {
                result2 += "\r\n " + input1[i];
                total2 += input1[i];
            }
        }
        if (result1.Length > 0) result1 = result1.Substring(2);
        if (result2.Length > 0) result2 = result2.Substring(2);

        return "{" + result1 + " } {" + result2 + " } total  " + total1.ToString() + " & " + total2.ToString();
    }

    private static bool[] PartitionValues(int[] values)
    {
        bool[] best_assignment = new bool[values.Length];
        bool[] test_assignment = new bool[values.Length];

        int total_value = values.Sum();

        int best_err = total_value;
        PartitionValuesFromIndex(values, 0, total_value, test_assignment, 0, ref best_assignment, ref best_err);

        return best_assignment;
    }

    private static void PartitionValuesFromIndex(int[] values, int start_index, int total_value,
        bool[] test_assignment, int test_value,
        ref bool[] best_assignment, ref int best_err)
    {
        // If start_index is beyond the end of the array,
        // then all entries have been assigned.
        if (start_index >= values.Length)
        {
            // We're done. See if this assignment is better than what we have so far.
            int test_err = Math.Abs(2 * test_value - total_value);
            if (test_err < best_err)
            {
                // This is an improvement. Save it.
                best_err = test_err;
                best_assignment = (bool[])test_assignment.Clone();
            }
        }
        else
        {
            // Try adding values[start_index] to set 1.
            test_assignment[start_index] = true;
            PartitionValuesFromIndex(values, start_index + 1, total_value,
                test_assignment, test_value + values[start_index],
                ref best_assignment, ref best_err);

            // Try adding values[start_index] to set 2.
            test_assignment[start_index] = false;
            PartitionValuesFromIndex(values, start_index + 1, total_value,
                test_assignment, test_value,
                ref best_assignment, ref best_err);
        }
    }
}
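
A small usage sketch: for the multiset {3, 1, 1, 2, 2, 1}, which sums to 10, the search finds a perfect 5-and-5 split (the exact grouping and line breaks depend on the string building above):

using System;

class Program
{
    static void Main()
    {
        // The sum is 10, so the best partition has totals 5 & 5.
        Console.WriteLine(CandidateCode.partition(new[] { 3, 1, 1, 2, 2, 1 }));
        // The output ends with: "total  5 & 5"
    }
}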

Implementing Singleton in C#


Context

You are building an application in C#. You need a class that has only one instance, and you need to provide a global point of access to the instance. You want to be sure that your solution is efficient and that it takes advantage of the Microsoft .NET common language runtime features. You may also want to make sure that your solution is thread safe.

Implementation Strategy

Even though Singleton is a comparatively simple pattern, there are various tradeoffs and options, depending upon the implementation. The following is a series of implementation strategies with a discussion of their strengths and weaknesses.

Singleton

The following implementation of the Singleton design pattern follows the solution presented in Design Patterns: Elements of Reusable Object-Oriented Software [Gamma95] but modifies it to take advantage of language features available in C#, such as properties:

 

using System;

public class Singleton
{
   private static Singleton instance;

   private Singleton() {}

   public static Singleton Instance
   {
      get 
      {
         if (instance == null)
         {
            instance = new Singleton();
         }
         return instance;
      }
   }
}
 

This implementation has two main advantages:

  • Because the instance is created inside the Instance property method, the class can exercise additional functionality (for example, instantiating a subclass), even though it may introduce unwelcome dependencies.

  • The instantiation is not performed until an object asks for an instance; this approach is referred to as lazy instantiation. Lazy instantiation avoids instantiating unnecessary singletons when the application starts.

 

The main disadvantage of this implementation, however, is that it is not safe for multithreaded environments. If separate threads of execution enter the Instance property method at the same time, more than one instance of the Singleton object may be created. Each thread could execute the following statement and decide that a new instance has to be created:

if (instance == null)

Various approaches solve this problem. One approach is to use an idiom referred to as Double-Check Locking [Lea99]. However, C# in combination with the common language runtime provides a static initialization approach, which circumvents these issues without requiring the developer to explicitly code for thread safety.

Static Initialization

 

 

One of the reasons Design Patterns [Gamma95] avoided static initialization is because the C++ specification left some ambiguity around the initialization order of static variables. Fortunately, the .NET Framework resolves this ambiguity through its handling of variable initialization:

 

public sealed class Singleton
{
   private static readonly Singleton instance = new Singleton();
   
   private Singleton(){}

   public static Singleton Instance
   {
      get 
      {
         return instance; 
      }
   }
}
 

In this strategy, the instance is created the first time any member of the class is referenced. The common language runtime takes care of the variable initialization. The class is marked sealed to prevent derivation, which could add instances. For a discussion of the pros and cons of marking a class sealed, see [Sells03]. In addition, the variable is marked readonly, which means that it can be assigned only during static initialization (which is shown here) or in a class constructor.

This implementation is similar to the preceding example, except that it relies on the common language runtime to initialize the variable. It still addresses the two basic problems that the Singleton pattern is trying to solve: global access and instantiation control. The public static property provides a global access point to the instance. Also, because the constructor is private, the Singleton class cannot be instantiated outside of the class itself; therefore, the variable refers to the only instance that can exist in the system.

Because the Singleton instance is referenced by a private static member variable, the instantiation does not occur until the class is first referenced by a call to the Instance property. This solution therefore implements a form of the lazy instantiation property, as in the Design Patterns form of Singleton.

The only potential downside of this approach is that you have less control over the mechanics of the instantiation. In the Design Patterns form, you were able to use a nondefault constructor or perform other tasks before the instantiation. Because the .NET Framework performs the initialization in this solution, you do not have these options. In most cases, static initialization is the preferred approach for implementing a Singleton in .NET.

Multithreaded Singleton

 

 

Static initialization is suitable for most situations. When your application must delay the instantiation, use a non-default constructor or perform other tasks before the instantiation, and work in a multithreaded environment, you need a different solution. Cases do exist, however, in which you cannot rely on the common language runtime to ensure thread safety, as in the Static Initialization example. In such cases, you must use specific language capabilities to ensure that only one instance of the object is created in the presence of multiple threads. One of the more common solutions is to use the Double-Check Locking [Lea99] idiom to keep separate threads from creating new instances of the singleton at the same time.

 

Note: The common language runtime resolves issues related to using Double-Check Locking that are common in other environments. For more information about these issues, see "The 'Double-Checked Locking Is Broken' Declaration," on the University of Maryland, Department of Computer Science Web site, at http://www.cs.umd.edu/~pugh/java/memoryModel/DoubleCheckedLocking.html.

 

The following implementation allows only a single thread to enter the critical area, which the lock block identifies, when no instance of Singleton has yet been created:

 

using System;

public sealed class Singleton
{
   private static volatile Singleton instance;
   private static object syncRoot = new Object();

   private Singleton() {}

   public static Singleton Instance
   {
      get 
      {
         if (instance == null) 
         {
            lock (syncRoot) 
            {
               if (instance == null) 
                  instance = new Singleton();
            }
         }

         return instance;
      }
   }
}
 

This approach ensures that only one instance is created and only when the instance is needed. Also, the variable is declared to be volatile to ensure that assignment to the instance variable completes before the instance variable can be accessed. Lastly, this approach uses a syncRoot instance to lock on, rather than locking on the type itself, to avoid deadlocks.

This double-check locking approach solves the thread concurrency problems while avoiding an exclusive lock in every call to the Instance property method. It also allows you to delay instantiation until the object is first accessed. In practice, an application rarely requires this type of implementation. In most cases, the static initialization approach is sufficient.
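
As an aside beyond the original pattern text: on .NET Framework 4 and later, the same thread-safe, lazy behavior can be had with System.Lazy<T>, which wraps the locking for you. A minimal sketch:

using System;

public sealed class Singleton
{
   // Lazy<T> defaults to a thread-safe initialization mode, so no explicit
   // locking or volatile field is needed.
   private static readonly Lazy<Singleton> lazy =
      new Lazy<Singleton>(() => new Singleton());

   private Singleton() {}

   public static Singleton Instance
   {
      get
      {
         return lazy.Value;
      }
   }
}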

Resulting Context

Implementing Singleton in C# results in the following benefits and liabilities:

Benefits

 

  • The static initialization approach is possible because the .NET Framework explicitly defines how and when static variable initialization occurs.

  • The Double-Check Locking idiom described earlier in "Multithreaded Singleton" is implemented correctly in the common language runtime.

Liabilities

 

 

If your multithreaded application requires explicit initialization, you have to take precautions to avoid threading issues.

Acknowledgments

[Gamma95] Gamma, Helm, Johnson, and Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, 1995.

[Lea99] Lea, Doug. Concurrent Programming in Java, Second Edition. Addison-Wesley, 1999.

[Sells03] Sells, Chris. "Sealed Sucks." sellsbrothers.com News. Available at: http://www.sellsbrothers.com/news/showTopic.aspx?ixTopic=411.

 

Inheritance

Inheritance:

  • Inheritance is a way to form new classes (instances of which are called objects) using classes that have already been defined.
  • Inheritance is employed to help reuse existing code with little or no modification.
  • The new classes, known as Sub-class or derived class, inherit attributes and behavior of the pre-existing classes, which are referred to as Super-class or Base class.

C# supports two types of inheritance mechanisms:

  1. Implementation Inheritance
  2. Interface Inheritance

Implementation Inheritance:

When a class (type) is derived from another class (type) such that it inherits all the members of the base type, it is Implementation Inheritance.

Interface Inheritance:

When a type (a class or a struct) inherits only the signatures of the functions from another type, it is Interface Inheritance.
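
A minimal sketch of interface inheritance (the types are invented for illustration):

public interface IShape
{
    double Area();
}

// Rectangle inherits only the signature of Area() and must supply the implementation.
public class Rectangle : IShape
{
    public double Width { get; set; }
    public double Height { get; set; }

    public double Area()
    {
        return Width * Height;
    }
}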

Benefits of using Inheritance

  • Once a behavior (method) or property is defined in a super class (base class), that behavior or property is automatically inherited by all subclasses (derived classes).
  • Code reusability is increased through inheritance.
  • Inheritance provides a clear model structure that is easy to understand, without much complexity.
  • Using inheritance, classes are grouped together in a hierarchical tree structure.
  • Code is easier to manage when divided into parent and child classes.
For example:

using System;

public class ParentClass
{
    public ParentClass()
    {
        Console.WriteLine("Parent Constructor.");
    }
    public void print()
    {
        Console.WriteLine("Parent Class.");
    }
}
public class ChildClass : ParentClass
{
    public ChildClass()
    {
        Console.WriteLine("Child Constructor.");
    }
    public static void Main()
    {
        ChildClass child = new ChildClass();
        child.print();
    }
}

Output:

Parent Constructor.

Child Constructor.

Parent Class.
