Sumedh Meshram

A Personal Blog

Which Country Has The Best Programming Language Programmer?

Programming languages are at the heart of every technological innovation. Therefore, a country with the best computer programmers can be considered technologically advanced in today's world. Comparing countries to determine which of them has the best computer programmers is slightly complicated, as different countries have different population sizes. Luckily, HackerRank makes it easy with its own set of metrics to measure the excellence of programmers from different countries. According to HackerRank, the following is the list of the top 10 countries with the best computer programmers.
 
1. China - The reason for China occupying the top position is not its population. The metrics for evaluating the best computer programmers are speed and accuracy. HackerRank holds special challenges on its website annually to determine the best programmers country by country. The challenges focus on coding skills, data structures and algorithmic concepts, mathematical and analytical skills, and functional programming. The participants from China have outshone all other countries collectively.
2. Russia - It is said that Russia has the best hackers in the world, and the world has allegedly seen their hacking skills. To be a hacker, you need to be a programmer of the highest level. Russian programmers scored 99.9 where Chinese programmers scored the full mark. However, they have come out ahead of China in algorithms.
 
3. Poland - This can come as a surprise to many, as Poland is not widely known as a country with many multinational tech companies. However, if you know how good the education system is in Poland, you will not wonder why it has managed to rank so high. Computer programming is taught in the lower classes in schools. Therefore, by the time students leave high school, they have mastered programming languages like Java and Python. This is also reflected by the fact that Polish programmers have won the Java challenges on HackerRank ahead of all other countries.
 
4. Switzerland - Switzerland hosts the headquarters of multiple international tech companies. In fact, Swiss computer programmers are among the most dominant on the scoreboard of HackerRank challenges. It is interesting to note that Switzerland is where Pascal, one of the foremost computer programming languages, came from. Besides, Switzerland is among the leading countries in the Global Innovation Index.
 
5. Hungary - The Hungarian government has introduced programming classes in primary and secondary schools, and therefore, students are groomed to be programmers from childhood. They have the best performance in tutorial challenges on HackerRank. It is somewhat surprising to many that, among various other technologically advanced European countries, Hungary is in the top 5. It is all about the education system and grooming from an early stage.
6. Japan - Japan is now known as a country of cryptocurrency. The revolutionary blockchain technology originated in Japan and is now ruling the world. In fact, according to HackerRank challenges, Japan is the leader in artificial intelligence. This only shows the intelligence and skill set of Japanese computer programmers. Japan has literally transformed in the last decade and is labeled as one of the leaders in innovation.
 
7. Taiwan - Taiwan and China go hand in hand, and Taiwan is considered one of the most technologically advanced countries. Its programmers are super fast in adapting to new programming languages, and according to a survey, Python is the most dominant language in the country. On HackerRank, computer programmers from Taiwan are among the leaders in algorithms, data structures, and functional programming challenges. The programmers are all-rounders, and it is this all-around growth that is accelerating the country to new heights in the technological field.
 
8. France - The French government made major changes in the education system to inspire students to become computer programmers. Just like Poland, France has offered programming classes in elementary schools since 2014, and the result is here to see. Their ranking is improving every year, and they are climbing the HackerRank board faster than most countries.
 
9. Czech Republic - According to HackerRank, the Czech Republic has the most dominant computer programmers in shell scripting, as proved through the several challenges it has held. The programmers also rank second in the mathematical challenges, which reflects their skill in functional programming.
 
10. Italy - Italy is slowly but steadily becoming one of the emerging countries in computer programming. Big companies are investing heavily in Italy to bag the top programmers in the country. Apple announced a new school for nearly 1,000 programmers in Italy. Programmers from the country have performed exceedingly well on HackerRank in database and tutorial challenges. Some of you might be surprised to find that the US and India do not feature among the top 10 countries. India ranks 31st while the US ranks 13th as per the HackerRank ranking based on challenges organized on the website.
 

 

Source: HOB

 

 

Cutting Edge - REST and Web API in ASP.NET Core

I’ve never been a fan of ASP.NET Web API as a standalone framework and I can hardly think of a project where I used it. Not that the framework in itself is out of place or unnecessary. I just find that the business value it actually delivers is, most of the time, minimal. On the other hand, I recognize in it some clear signs of the underlying effort Microsoft is making to renew the ASP.NET runtime pipeline. Overall, I like to think of ASP.NET Web API as a proof of concept for what today has become ASP.NET Core and, specifically, the new runtime environment of ASP.NET Core.

Web API was primarily introduced as a way to make building a RESTful API easy and comfortable in ASP.NET. This article is about how to achieve the same result—building a RESTful API—in ASP.NET Core.

The Extra Costs of Web API in Classic ASP.NET

ASP.NET Web API was built around the principles sustaining the Open Web Interface for .NET (OWIN) specification, which is meant to decouple the Web server from hosted Web applications. In the .NET space, the introduction of OWIN marked a turning point, where the tight integration of IIS and ASP.NET was questioned. That tight coupling was fully abandoned in ASP.NET Core.

Any Web façade built using the ASP.NET Web API framework relies on a completely rewritten pipeline that uses the standard OWIN interface to dialog with the underlying host Web server. Yet, an ASP.NET Web API is not a standalone application. To be available for callers it needs a host environment that takes care of listening to some configured port, captures incoming requests and dispatches them down the Web API pipeline.

A Web API application can be hosted in a Windows service or in a custom console application that implements the appropriate OWIN interfaces. It can also be hosted by a classic ASP.NET application, whether targeting Web Forms or ASP.NET MVC. Over the past few years, hosting Web API within a classic ASP.NET MVC application proved to be a very common scenario, yet one of the least effective in terms of raw performance and memory footprint.

As Figure 1 shows, whenever you arrange a Web API façade within an ASP.NET MVC application, three frameworks end up living side-by-side, processing every single Web API request. The host ASP.NET MVC application is encapsulated in an HTTP handler living on top of system.web—the original ASP.NET runtime environment. On top of that—taking up additional memory—you have the OWIN-based pipeline of Web API.

Figure 1 Frameworks Involved in a Classic ASP.NET Web API Application

The vision of introducing a server-independent Web framework is, in this case, significantly weakened by the constraints of staying compatible with the existing ASP.NET pipeline. Therefore, the clean and REST-friendly design of Web API doesn’t unleash its full potential because of the legacy system.web assembly. From a pure performance perspective, only some edge use cases really justify the use of Web API.

Effective Use Cases for Web API

Web API is the most high-profile example of the OWIN principles in action. A Web API library runs behind a server application that captures and forwards incoming requests. This host can be a classic Web application on the Microsoft stack (Web Forms, ASP.NET MVC) or it can be a console application or a Windows service.

In any case, it has to be an application endowed with a thin layer of code capable of dialoging with the Web API listener.

Hosting a Web API outside of the Web environment removes at the root any dependency on the system.web assembly, thus magically making the request pipeline as lean and mean as desired.

This is the crucial point that led the ASP.NET Core team to build the ASP.NET Core pipeline. The ideal hosting conditions for Web API have been reworked to be the ideal hosting conditions for just about any ASP.NET Core application. This enabled a completely new pipeline devoid of dependencies on the system.web assembly and hostable behind an embedded HTTP server exposing a contracted interface—the IServer interface.

The OWIN specification and Katana, the implementation of it for the IIS/ASP.NET environment, play no role in ASP.NET Core. But the experience with these platforms matured the technical vision (especially with Web API edge cases), which shines through the dazzling new pipeline of ASP.NET Core.

The funny thing is that once the entire ASP.NET pipeline was redesigned—deeply inspired by the ideal hosting environment for Web API—that same Web API as a separate framework ceased to be relevant. In the new ASP.NET Core pipeline there’s the need for just one application model—the MVC application model—based on controllers, and controller classes are a bit richer than in classic ASP.NET MVC, thus incorporating the functions of old ASP.NET controllers and Web API controllers.

Extended ASP.NET Core Controllers

In ASP.NET Core, you work with controller classes whether you intend to serve HTML or any other type of response, such as JSON or PDF. A bunch of new action result types have been added to make building RESTful interfaces easy and convenient. Content negotiation is fully supported for any controller classes, and formatting helpers have been baked into the action invoker infrastructure. If you want to build a Web API that exposes HTTP endpoints, all you do is build a plain controller class, as shown here:

 
public class ApiController : Controller
{
  // Your methods here
}

The name of the controller class is arbitrary. While having /api somewhere in the URL is desirable for clarity, it’s in no way required. You can have /api in the URL being invoked whether you use conventional routing (an ApiController class) to map URLs to action methods or attribute routing. In my personal opinion, attribute routing is probably preferable because it allows you to expose multiple endpoints with the same /api item in the URL, while being defined in distinct, arbitrarily named controller classes.
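
For instance, a minimal sketch of an attribute-routed controller might look like this (the route template and controller name are purely illustrative, and FindResourceDataInSomeWay is the same placeholder helper used in the examples that follow):

[Route("api/news")]
public class NewsController : Controller
{
  // GET api/news/{id}
  [HttpGet("{id}")]
  public IActionResult Get(Guid id)
  {
    // Retrieve the resource data in some application-specific way
    var data = FindResourceDataInSomeWay(id);
    return Ok(data);
  }
}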

The Controller class in ASP.NET Core has a lot more features than the class in classic ASP.NET MVC, and most of the extensions relate to building a RESTful Web API. First and foremost, all ASP.NET Core controllers support content negotiation. Content negotiation refers to a silent negotiation taking place between the caller and the API regarding the actual format of returned data.

Content negotiation doesn’t happen all the time and for just every request. It takes place only if the incoming request contains an Accept HTTP header that advertises the MIME types the caller is able to understand. In this case, the ASP.NET Core infrastructure goes through the types listed in the header content until it finds one for which a formatter exists in the current configuration of the application. If no matching formatter is found in the list of types, then the default JSON formatter is used, like so:

 
[HttpGet]
public ObjectResult Get(Guid id)
{
  // Do something here to retrieve the resource data
  var data = FindResourceDataInSomeWay(id);
  return Ok(data);
}

Another remarkable aspect of content negotiation is that while it won’t produce any change in the serialization process without an Accept HTTP header, it’s technically triggered only if the response being sent back by the controller is of type ObjectResult. The most common way to return an ObjectResult action result type is by serializing the response via the Ok method. It’s important to note that if you serialize the controller response via, say, the Json method, no negotiation will ever take place regardless of the headers sent. Support for output formatters can be added programmatically through the options of the AddMvc method. Here’s an example:

 
services.AddMvc(options =>
{
  options.OutputFormatters.Add(new PdfFormatter());
});

In this example, the demo class PdfFormatter internally contains the list of supported MIME types it can handle.

Note that by using the Produces attribute you override the content negotiation, as shown here:

 
[Produces("application/json")]
public class ApiController : Controller
{
  // Action methods here
}

The Produces attribute, which you can apply at the controller or method level, forces the output of type ObjectResult to be always serialized in the format specified by the attribute, regardless of the Accept HTTP header.

For more information on how to format the response of a controller method, you might want to check out the content at bit.ly/2klDgdY.

REST-Oriented Action Result Types

Whether a Web API is better off with a REST design is a highly debatable point. In general, it’s safe enough to say that the REST approach is based on a known set of rules and, in this regard, it is more standard. For this reason, it’s generally recommended for a public API that’s part of the enterprise business. If the API exists only to serve a limited number of clients—mostly under the same control of the API creators—then no real business difference exists between using a REST design or a looser remote-procedure call (RPC) approach.

In ASP.NET Core, there’s nothing like a distinct and dedicated Web API framework. There are just controllers with their set of action results and helper methods. If you simply want to build a Web API, you just return JSON, XML or whatever else. If you want to build a RESTful API, you just get familiar with another set of action results and helper methods. Figure 2 presents the new action result types that ASP.NET Core controllers can return. In ASP.NET Core, an action result type is a type that implements the IActionResult interface.

Figure 2 Web API-Related Action Result Types  

Type Description
AcceptedResult Returns a 202 status code. In addition, it returns the URI to check on the ongoing status of the request. The URI is stored in the Location header.
BadRequestResult Returns a 400 status code.
CreatedResult Returns a 201 status code. In addition, it returns the URI of the resource created, stored in the Location header.
NoContentResult Returns a 204 status code and null content.
OkResult Returns a 200 status code.
UnsupportedMediaTypeResult Returns a 415 status code.


Note that some of the types in Figure 2 come with buddy types that provide the same core function but with some slight differences. For example, in addition to AcceptedResult and CreatedResult, you find xxxAtActionResult and xxxAtRouteResult types. The difference is in how the types express the URI to monitor the status of the accepted operation and the location of the resource just created. The xxxAtActionResult type expresses the URI as a pair of controller and action strings whereas the xxxAtRouteResult type uses a route name.

Types like OkResult and BadRequestResult, instead, have xxxObjectResult variations, namely OkObjectResult and BadRequestObjectResult. The difference is that object result types also let you append an object to the response. So OkResult just sets a 200 status code, but OkObjectResult sets a 200 status code and appends an object of your choice. A common way to use this feature is to return a ModelState dictionary updated with the detected error when a bad request is handled.
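
As a rough sketch of how these variations can be combined (MyResource and CreateResourceInSomeWay are the same placeholders used in Figure 3):

[HttpPost]
public IActionResult Post(MyResource res)
{
  if (!ModelState.IsValid)
  {
    // BadRequestObjectResult: 400 plus the ModelState dictionary in the body
    return BadRequest(ModelState);
  }
  var resId = CreateResourceInSomeWay(res);
  // CreatedAtActionResult: 201 plus a Location header built from the
  // controller/action pair rather than from a raw path string
  return CreatedAtAction(nameof(Get), new { id = resId }, res);
}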

Another interesting distinction is between NoContentResult and EmptyResult. Both return an empty response, but NoContentResult sets a status code of 204, whereas EmptyResult sets a 200 status code. All this said, building a RESTful API is a matter of defining the resource being acted on and arranging a set of calls using the HTTP verb to perform common manipulation operations. You use GET to read, PUT to update, POST to create a new resource and DELETE to remove an existing one. Figure 3 shows the skeleton of a RESTful interface around a sample resource type as it results from ASP.NET Core classes.

Figure 3 Common RESTful Skeleton of Code
 
[HttpGet]
public ObjectResult Get(Guid id)
{
  // Do something here to retrieve the resource
  var res = FindResourceInSomeWay(id);
  return Ok(res);
}
[HttpPut]
public AcceptedResult UpdateResource(Guid id, string content)
{
  // Do something here to update the resource
  var res = UpdateResourceInSomeWay(id, content);
  var path = String.Format("/api/resource/{0}", res.Id);
  return Accepted(new Uri(path));  
}
[HttpPost]
public CreatedResult AddNews(MyResource res)
{
  // Do something here to create the resource
  var resId = CreateResourceInSomeWay(res);
  // Returns HTTP 201 and sets the URI to the Location header
  var path = String.Format("/api/resource/{0}", resId);
  return Created(path, res);
}
[HttpDelete]
public NoContentResult DeleteResource(Guid id)
{
  // Do something here to delete the resource
  // ...
  return NoContent();
}

If you’re interested in further exploring the implementation of ASP.NET Core controllers for building a Web API, have a look at the GitHub folder at bit.ly/2j4nyUe.

Wrapping Up

A Web API is a common element in most applications today. It’s used to provide data to an Angular or MVC front end, as well as to provide services to mobile or desktop applications. In the context of ASP.NET Core, the term “Web API” finally achieves its real meaning without ambiguity or need to further explain its contours. A Web API is a programmatic interface comprising a number of publicly exposed HTTP endpoints that typically (but not necessarily) return JSON or XML data to callers. The controller infrastructure in ASP.NET Core fully supports this vision with a revamped implementation and new action result types. Building a RESTful API in ASP.NET has never been easier!

5 tools for programmers to increase productivity

 

Programming complex code is undoubtedly a difficult task. Programmers often rely on certain online tools to make life easier and achieve speed and accuracy. These tools allow developers to create, test and debug the software. 

With constant technological advancements, developers are looking to enhance their productivity and stay updated with the evolving skill requirements. Here are some tools that programmers must explore to be more productive. 

#1. GitKraken

Quoting from their website, “Axosoft GitKraken is a cross-platform Git client with efficiency, elegance and reliability at the core. It is made for developers by developers”. 

GitKraken is known for its user-friendly interface, easy switching between projects and graphical interface which helps developers to visualize project branches effectively. 

#2. Visual Studio Code

Assuring a frictionless edit-build-debug cycle, VS Code ensures high productivity with syntax highlighting, bracket-matching, box-selection and more. Additionally, it supports a wide variety of languages. For debugging, VS Code provides an interactive debugger to inspect code and execute commands. 

#3. Docker

Docker is an open source tool which enables developers to create, deploy and run applications using containers. This guarantees that the application will run consistently across Linux platforms irrespective of any customizations to the environment. Docker only requires applications to be shipped with things that are not already running on the host computer, significantly boosting performance. 
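
As a rough sketch of the typical workflow (the image name and port mapping below are placeholders):

# build an image from the Dockerfile in the current directory
docker build -t my-app .
# run the image as a container, mapping port 8080 on the host to the container
docker run -p 8080:8080 my-app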

#4. Chrome DevTools 

Chrome DevTools is a set of tools built directly into the Google Chrome browser. Websites can be designed better and faster using this tool as it allows the developers to edit pages on-the-go and rectify problems swiftly. It caters to the needs of both beginners and experts by teaching the basics as well as performing higher-level operations like optimizing website speed. 

#5. Postman 

Through design, testing and full production, Postman simplifies API development for developers, ensuring greater productivity. Developers can create automated tests to monitor their APIs and examine responses for debugging, among other functions. With almost 6 million users, Postman is a widely used productivity tool within the developer community.

25 basic Linux terminal commands to remember


On Linux, the command-line is a powerful tool. Once you understand how to use it, it’s possible to accomplish a whole lot of advanced operations really fast. Sadly, new users find the Linux command-line confusing, and don’t know where to start.

In an effort to educate new users on the Linux command-line, we’ve made a list of 25 basic Linux terminal commands to remember. Let’s get started!

1. ls

ls is the list directory command. In order to use it, launch a terminal window and type the command ls.

 
 

 

 
 

 

ls

The ls command can also be used to reveal hidden files with the “a” command line switch.

ls -a

2. cd

cd is how you change directories in the terminal. To swap to a different directory from where the terminal started, do:

cd /path/to/location/

It is also possible to go backwards up a directory by using “..”.

cd ..

3. pwd

To show the current directory in the Linux terminal, use the pwd command.

 
 

 

 
 

 

pwd

4. mkdir

If you’d like to create a new folder, use the mkdir command.

mkdir name-of-new-folder

To create any missing parent directories along the way (with no error if the folder already exists), use the “p” command line switch.

mkdir -p name-of-new-folder

5. rm

To delete a file from the command line, use the rm command.

rm /path/to/file

rm can also be used to delete a folder if there are files inside of it by making use of the “rf” command line switch.

rm -rf /path/to/folder

6. cp

Want to make a copy of a file or folder? Use the cp command.

 
 

 

 
 

 

To copy a file, use cp followed by the location of the file and the destination you’d like to copy it to.

cp /path/to/file /path/to/destination

Or, to copy a folder, use cp with the “r” command line switch:

cp -r /path/to/folder /path/to/destination

7. mv

The mv command can do a lot of things on Linux. It can move files around to different locations, but it can also rename files.

To move a file from one location to another, try the following example.

mv /path/to/file /place/to/put/file

If you want to move a folder, write the location of the folder followed by the desired location where you’d like to move it.

mv /path/to/folder /place/to/put/folder/

Lastly, to rename a file or folder, cd into the directory of the file/folder you’d like to rename, and then use the mv command, for example:

mv name-of-file new-name-of-file

Or, for a folder, do:

mv name-of-folder new-name-of-folder

8. cat

The cat command lets you view the contents of files in the terminal. To use cat write the command out followed by the location of the file you’d like to view. For example:

 
 
cat /location/of/file

9. head

Head lets you view the top 10 lines of a file. To use it, enter the head command followed by the location of the file.

head /location/of/file

10. tail

Tail lets you view the bottom 10 lines of a file. To use it, enter the tail command followed by the location of the file.

tail /location/of/file

11. ping

On Linux, the ping command lets you check the latency between your network and a remote internet or LAN server.

 
 
ping website.com

Or

ping IP-address

To ping only a few times, use the ping command with the “c” command line switch and a number. For example, to ping Google 3 times, do:

ping -c 3 google.com

12. uptime

To check how long your Linux system has been online, use the uptime command.

uptime

13. uname

The uname command can be used to view your current distribution codename, release number, and even the version of Linux you are using. To use uname, write the command followed by the “a” command line switch.

Using the “a” command line switch prints out all information, so it’s best to use this instead of all other options.

uname -a

14. man

The man command lets you view the instruction manual of any program. To take a look at the manual, run the man command followed by the name of the program. For example, to view the manual of cat, run:

man cat

15. df

Df is a way to easily view how much space is taken up on the file system(s) on Linux. To use it, run the df command.

df

To make df more easily readable, use the “h” command line switch. This puts the output in “human readable” mode.

df -h

16. du

Need to view the space that a directory on your system is taking up? Make use of the du command. For example, to see how big your /home/ folder is, do:

du ~/

To make the du output more readable, try the “h” command-line switch. This will put the output in “human readable” mode.

du -h ~/

17. whereis

With whereis, it’s possible to track down the exact location of an item in the command-line. For example, to find the location of the Firefox binary on your Linux system, run:

whereis firefox

18. locate

Searching for files, programs and folders on the Linux command-line is made easy with locate. To use it, just write out the locate command, followed by a search term.

locate search-term

19. grep

With the grep command, it’s possible to search for a pattern. A good example use of the grep command is to use it to filter out a specific line of text in a file.

Understand that grep usually isn’t run by itself. Instead, it’s typically combined with another command, like so:

cat text-file.txt | grep 'search term'

Essentially, to use grep to search for patterns, remember this formula:

command command-operations | grep 'search term'

20. ps

To view current running processes directly from the Linux terminal, make use of the ps command.

ps

Need a fuller, more detailed report of processes? Run ps with aux.

ps aux

21. kill

Sometimes, you need to kill a problem program. To do this, you’ll need to take advantage of the kill command. For example, to close Firefox, do the following.

First, use pidof to find the process number for Firefox.

pidof firefox

Then, kill it with the kill command.

kill process-id-number

Still won’t close? Use the “9” command-line switch.

kill -9 process-id-number

22. killall

Using the killall command, it’s possible to end all instances of a running program. To use it, run the killall command followed by the name of a program. For example, to kill all running Firefox processes, do:

killall firefox

23. curl

Need to download a file from the internet through the Linux terminal? Use curl! To start a download, write the curl command followed by the file’s URL, the > redirect symbol, and the location where you’d like to save it. For example:

curl https://www.download.com/file.zip > ~/Downloads/file.zip

24. free

Running out of memory? Check your swap space and free RAM space with the free command.

free

25. chmod

With chmod, it’s possible to update the permissions of a file or folder.

To update the permissions of a file so everyone on the PC can read, write and execute it, do:

chmod a+rwx /location/of/file-or-folder

To update the permissions so only the owner has read and write access, try:

chmod u+rw,go-rwx /location/of/file-or-folder

To add read and execute permissions for a specific group or for everyone else (world) on the Linux system, run:

chmod g+rx /location/of/file-or-folder
chmod o+rx /location/of/file-or-folder

Conclusion

The Linux command-line has endless actions and operations to know, and even after getting through this list, you’ll still have a lot more to learn. That said, this list is sure to help beef up your command-line knowledge. Besides, everyone has to start somewhere!

6 Most Demanded Programming Languages of 2019



Learning the right programming language at the right time is very important. If you are a student or an aspiring software developer who is planning to learn a new programming language, you should check the trends first.

There are many job portals and trend-analysis websites that release lists of popular languages at regular intervals. These lists not only help students and professionals get an idea about the most in-demand languages out there but also shed some light on job availability. Today, I will share the six most in-demand programming languages based on the number of jobs available on Indeed in January 2019.

Most In-Demand Programming Languages of 2019

 

1. Java – 65,986 jobs

Java was developed by James Gosling at Sun Microsystems, which was later acquired by the Oracle Corporation. This is one of the most used languages in the world. The number of job postings has grown by 6% as compared to last year.

Java is based on the “write once, run anywhere” (WORA) concept. When you compile Java code, it’s converted into bytecode, which can run on any platform without any need for recompilation. That’s why it’s also called a platform-independent language.

Read: 5 Important Tips to Become a Good Java Developer

2. Python – 61,818 jobs

Python was developed by a Dutch programmer, Guido van Rossum. It can be considered one of the fastest growing programming languages. Python has seen growth of around 24% in terms of job postings, with 61,000 postings as compared to last year’s 46,000.

It’s a high-level object-oriented programming language that offers a wide range of third-party libraries and extensions to programmers. Developers also say Python is simple and easy to learn. This language is also used to decrease the time and cost spent on application maintenance.

Read: 10 Best Python Courses For Programmers and Developers

3. JavaScript – 38,018 jobs

JavaScript is the third most popular programming language on our list. It was inspired by Java and developed by American technologist Brendan Eich. This year, JavaScript job postings haven’t seen many changes, but the language still managed to secure the third position.

Unlike Java, JavaScript can’t be used to develop standalone apps or applets. It’s fast and doesn’t need to be compiled before use. JavaScript enables our code to interact with the browser and can even change or update both HTML and CSS.

Also Read: Best Courses to Learn JavaScript Programming Online

4. C++ – 36,798 jobs

Though there are many programming languages available today, the power of C++ can’t be ignored. Developed by Danish computer scientist Bjarne Stroustrup, C++ is widely used for game development, firmware development, system development, client-server applications, drivers, etc. C++ is actually an advanced version of C, with object-oriented programming capabilities. Its popularity grew by 16.22% as compared to the last year’s job postings.

Read: 6 Best IDEs For C and C++ Programming Language

5. C# – 27,521 jobs

C# is popularly used for Windows program development under Microsoft’s proprietary .NET framework. It’s mainly used for implementing back-end services and database applications. It’s a hybrid of the C and C++ languages. As for the numbers, C#’s job postings didn’t grow that much, but it’s still one of the most in-demand languages.

Read: Difference Between C, C++, Objective-C and C# Programming Language

6. PHP – 16,890 jobs

One of the most popular languages used in web development, Hypertext Preprocessor, or PHP, may be losing its relevance in recent years. It’s an open source scripting language developed by a Danish-Canadian programmer.

Though the community is working hard to provide support, competing with Python and other newcomers seems difficult. PHP is commonly used to retrieve data from a database for use on web pages. Its job postings increased by about 2,000 as compared to last year.

Read: Is PHP a Scripting or a Programming Language?

I hope you have got an idea and are able to decide which programming language you should learn in 2019. Whatever language you choose, first build a base by learning the fundamentals, then start attempting small problems, and ultimately move on to medium and large projects.

Visual Studio Code Keyboard Shortcut For Windows

Introduction

 
In this article, we will learn some Visual Studio Code keyboard shortcuts to use while working on a Windows machine. Visual Studio Code keyboard shortcuts help developers work faster and more efficiently and boost their performance. Keyboard shortcuts are keys or combinations of keys that provide an alternative way to do something. These shortcuts can provide an easier and quicker method of using Visual Studio Code.
 
 
I have categorized all the shortcut keys into the following categories.
  • General Shortcuts
  • Basic Editing Shortcuts
  • Navigation Shortcuts
  • Search and Replace Shortcuts
  • Multi-Cursor and Selection Shortcuts
  • Rich Languages Editing Shortcuts
  • Editor Management Shortcuts
  • File Management Shortcuts
  • Debug Shortcuts
  • Integrated Terminal Shortcuts
We can also check all the shortcut keys using the following command. 
 
  1. Ctrl+K Ctrl+S  
 
General Shortcuts
 
Shortcut Key Descriptions
Ctrl+Shift+P, F1 Show Command Palette
Ctrl+P Quick Open, Go to File
Ctrl+Shift+N New window
Ctrl+Shift+W Close window
Ctrl+, User Settings
Ctrl+K Ctrl+S Keyboard Shortcuts
 
Basic Editing Shortcuts
 
Shortcut Key Descriptions
Ctrl+X Cut line
Ctrl+C Copy line
Alt+ ↑ / ↓ Move line up/down
Shift+Alt + ↓ / ↑ Copy line up/down
Ctrl+Shift+K Delete line
Ctrl+Enter Insert line below
Ctrl+Shift+Enter Insert line above
Ctrl+Shift+\ Jump to matching bracket
Ctrl+] / [ Indent/outdent line
Home / End Go to beginning/end of line
Ctrl+Home Go to beginning of file
Ctrl+End Go to end of file
Ctrl+↑ / ↓ Scroll line up/down
Alt+PgUp / PgDn Scroll page up/down
Ctrl+Shift+[ Fold (collapse) region
Ctrl+Shift+] Unfold (uncollapse) region
Ctrl+K Ctrl+[ Fold (collapse) all subregions
Ctrl+K Ctrl+] Unfold (uncollapse) all subregions
Ctrl+K Ctrl+0 Fold (collapse) all regions
Ctrl+K Ctrl+J Unfold (uncollapse) all regions
Ctrl+K Ctrl+C Add line comment
Ctrl+K Ctrl+U Remove line comment
Ctrl+/ Toggle line comment
Shift+Alt+A Toggle block comment
Alt+Z Toggle word wrap
 
Navigation Shortcuts
 
 
Shortcut Key Descriptions
Ctrl+T Show all Symbols
Ctrl+G Go to Line
Ctrl+P Go to File
Ctrl+Shift+O Go to Symbol
Ctrl+Shift+M Show Problems panel
F8 Go to the next error
Shift+F8 Go to previous error
Ctrl+Shift+Tab Navigate editor group history
Alt+ ← / → Go back / forward
Ctrl+M Toggle Tab moves the focus
 
Search and Replace Shortcuts
 
Shortcut Key Descriptions
Ctrl+F Find
Ctrl+H Replace
F3 / Shift+F3 Find next/previous
Alt+Enter Select all occurrences of Find match
Ctrl+D Add selection to next Find match
Ctrl+K Ctrl+D Move last selection to next Find match
Alt+C / R / W Toggle case-sensitive / regex / whole word
 
Multi-cursor and selection Shortcuts
 
Shortcut Key Descriptions
Alt+Click Insert cursor
Ctrl+Alt+ ↑ / ↓ Insert cursor above / below
Ctrl+U Undo last cursor operation
Shift+Alt+I Insert cursor at end of each line selected
Ctrl+I Select current line
Ctrl+Shift+L Select all occurrences of the current selection
Ctrl+F2 Select all occurrences of the current word
Shift+Alt+→ Expand selection
Shift+Alt+← Shrink selection
 
Editor Management Shortcuts
 
Shortcut Key Descriptions
Ctrl+F4, Ctrl+W Close editor
Ctrl+K F Close folder
Ctrl+\ Split editor
Ctrl+ 1 / 2 / 3 Focus into 1st, 2nd or 3rd editor group
Ctrl+K Ctrl+ ←/→ Focus into previous/next editor group
Ctrl+Shift+PgUp / PgDn Move editor left/right
Ctrl+K ← / → Move active editor group
 
File Management Shortcuts
 
Shortcut Key Descriptions
Ctrl+N New File
Ctrl+O Open File
Ctrl+S Save
Ctrl+Shift+S Save As
Ctrl+K S Save All
Ctrl+F4 Close
Ctrl+K Ctrl+W Close All
Ctrl+Shift+T Reopen closed editor
Ctrl+K Enter Keep preview mode editor open
Ctrl+Tab Open next
Ctrl+Shift+Tab Open previous
Ctrl+K P Copy path of an active file
Ctrl+K R Reveal active file in Explorer
Ctrl+K O Show active file in a new window/instance
 
Debug Shortcuts
 
Shortcut Key Descriptions
F9 Toggle breakpoint
F5 Start/Continue
Shift+F5 Stop
F11 / Shift+F11 Step into/out
F10 Step over
Ctrl+K Ctrl+I Show hover
 
Integrated Terminal Shortcuts 
 
Shortcut Key Descriptions
Ctrl+` Show integrated terminal
Ctrl+Shift+` Create a new terminal
Ctrl+C Copy selection
Ctrl+V Paste into an active terminal
Ctrl+↑ / ↓ Scroll up/down
Shift+PgUp / PgDn Scroll page up/down
Ctrl+Home / End Scroll to the top/bottom

5 Evergreen goals To guide technology organization

These 5 evergreen goals are a useful way to help technology organizations of all sizes make decisions, categorize work, allocate resources, and spur innovation and productivity without interfering with team-specific, time-boxed goals. Whether you’re leading through change or focusing your team, these evergreen goals (or your variations of them) might just be what you need to bring foundational consistency to your technology organization without slowing them down. Here’s our set of evergreen goals.

1. Reduce Complexity

Some systems might be complex because the problems they address are complicated. Perhaps the complexity is justified. That said, it’s startling how much complexity is created unintentionally. This evergreen goal is focused on reducing accidental or unintentional complexity. Sometimes it’s created because of expediency, but often it’s the result of architecture that does not evolve properly. The end result is the same, however. You probably see this in some of your own systems as they become increasingly difficult to fix or improve in a timely manner without causing problems in other areas. Unintentionally complex systems are also difficult to secure, scale, move, and recover. I’ve seen this at startups as well as at long-standing companies like Morningstar with lengthy histories of product development, acquisitions, and integration. This goal is not only about technology but is also about reducing complexity in the processes that drive how we plan, work together, communicate, and hire.

2. Improve Product Completeness

Technology teams often cut corners in order to deliver promised functionality on schedule. Regardless of why or how that happens, it does. The purpose of this evergreen goal is to encourage teams to always think intentionally about product completeness. We challenge our teams to continually find ways to improve security, scalability, and resilience, for example, and not just ways to deliver new functionality. Completeness work is often very underappreciated until something terrible happens. Don’t wait until you experience a data breach, extended downtime, or an inability to scale before you think about product completeness. Be pragmatic, but don’t be foolish.

3. Increase Uptime

Delivering a product (internal or otherwise) is one thing, but keeping it up and running is an operational challenge that is often an afterthought in many organizations. The purpose of this evergreen goal is to encourage teams to think about monitoring, alerting, logging, incident response, recoverability, and automation. This isn’t just about technology. It’s also about ensuring that operations processes are efficient, modern, updated, and focused on the customer. Identify and correct problems before your customers report them. They expect that from you.

4. Own Less Infrastructure

In this modern age of high quality public cloud infrastructure, it makes little sense for most companies to run their own data centers for most of their workloads. It’s rarely a business differentiator anymore. Obviously, this evergreen goal might only apply to you if you’re still running your own data centers, but also consider other infrastructure you might own. Do you have your own call center equipment, for example? It might be worth rethinking that. At Morningstar, we are in the middle of a multi-year cloud transformation and this goal is particularly important to us. The purpose of this goal is to encourage teams to find ways to reduce current infrastructure footprints so that we can continue to draw down our dependence on the infrastructure that we own and maintain.

5. Maximize Talent

The technology landscape is changing so quickly and access to rich web services is abundant. A quick look at any major cloud service provider reveals that they’ve moved well beyond infrastructure services into services that spur innovation and increase productivity. Look at all the services related to machine learning, for example. Hopefully, you’ve hired people not just for what they already know but also for their aptitude and desire for continuing education. The tendency for many companies is to hire from the outside without first considering modernizing the skill sets of people they already have in-house. The modern workforce expects companies to invest in professional development, so this evergreen goal to maximize talent is a constant reminder to do that. It benefits individuals, teams, and the overall business to re-skill in-house talent.

Takeaways

Remember though, that you cannot immediately change culture. You have to nurture and evolve it. Installing and promoting these evergreen goals is often like creating a new habit or lifestyle change. It requires commitment, persistence, repetition, and encouragement. Use the terminology and concepts in meetings, conversations, and presentations, and encourage others to do the same. Make the effort inclusive, sustained, and intentional. The overall purpose for these evergreen goals is to remove friction from your technology organization in order to spur innovation and increase productivity. Sometimes simple measures like these yield the most impressive results.

Useful Git Commands

Git is the most widely used and powerful version control system for tracking changes in computer files and coordinating work on those files among multiple people. It is primarily used for source code management in software development, but it can be used to keep track of changes in any set of files.

Git was developed by Linus Torvalds in 2005 as a distributed, open source version control system and, of course, it is free to use. As a distributed revision control system, it is aimed at speed, data integrity, and support for distributed, non-linear workflows.

While other version control systems, e.g. CVS and SVN, keep most of their data, like commit logs, on a central server, every Git repository on every computer is a full-fledged repository with complete history and full version-tracking abilities, independent of network access or a central server.

However, almost all IDEs support Git out of the box, so we are not required to type Git commands manually, but it is always good to understand these commands. Below is a list of some Git commands to help you work efficiently with Git.

Git Help

The most useful command in Git is git help, which provides us with all the help we require. If we type git help in the terminal, we will get:

 
usage: git [--version] [--help] [-C <path>] [-c <name>=<value>]
 
           [--exec-path[=<path>]] [--html-path] [--man-path] [--info-path]
 
           [-p | --paginate | --no-pager] [--no-replace-objects] [--bare]
 
           [--git-dir=<path>] [--work-tree=<path>] [--namespace=<name>]
 
           <command> [<args>]
 
 
These are common Git commands used in various situations:
 
 
start a working area (see also: git help tutorial)
 
   clone      Clone a repository into a new directory
 
   init       Create an empty Git repository or reinitialize an existing one
 
 
work on the current change (see also: git help everyday)
 
   add        Add file contents to the index
 
   mv         Move or rename a file, a directory, or a symlink
 
   reset      Reset current HEAD to the specified state
 
   rm         Remove files from the working tree and from the index
 
 
examine the history and state (see also: git help revisions)
 
   bisect     Use binary search to find the commit that introduced a bug
 
   grep       Print lines matching a pattern
 
   log        Show commit logs
 
   show       Show various types of objects
 
   status     Show the working tree status
 
 
grow, mark and tweak your common history
 
   branch     List, create, or delete branches
 
   checkout   Switch branches or restore working tree files
 
   commit     Record changes to the repository
 
   diff       Show changes between commits, commit and working tree, etc
 
   merge      Join two or more development histories together
 
   rebase     Reapply commits on top of another base tip
 
   tag        Create, list, delete or verify a tag object signed with GPG
 
 
collaborate (see also: git help workflows)
 
   fetch      Download objects and refs from another repository
 
   pull       Fetch from and integrate with another repository or a local branch
 
   push       Update remote refs along with associated objects
 
 
'git help -a' and 'git help -g' list available sub-commands and some concept guides.
 
See 'git help <command>' or 'git help <concept>' to read about a specific sub-command or concept.
 


The command git help -a will give us a complete list of git commands:

 
Available git commands in '/usr/local/git/libexec/git-core'
 
  add                     gc                      receive-pack
 
  add--interactive        get-tar-commit-id       reflog
 
  am                      grep                    remote
 
  annotate                gui                     remote-ext
 
  apply                   gui--askpass            remote-fd
 
  archimport              gui--askyesno           remote-ftp
 
  archive                 gui.tcl                 remote-ftps
 
  askpass                 hash-object             remote-http
 
  bisect                  help                    remote-https
 
  bisect--helper          http-backend            repack
 
  blame                   http-fetch              replace
 
  branch                  http-push               request-pull
 
  bundle                  imap-send               rerere
 
  cat-file                index-pack              reset
 
  check-attr              init                    rev-list
 
  check-ignore            init-db                 rev-parse
 
  check-mailmap           instaweb                revert
 
  check-ref-format        interpret-trailers      rm
 
  checkout                log                     send-email
 
  checkout-index          ls-files                send-pack
 
  cherry                  ls-remote               sh-i18n--envsubst
 
  cherry-pick             ls-tree                 shortlog
 
  citool                  mailinfo                show
 
  clean                   mailsplit               show-branch
 
  clone                   merge                   show-index
 
  column                  merge-base              show-ref
 
  commit                  merge-file              stage
 
  commit-tree             merge-index             stash
 
  config                  merge-octopus           status
 
  count-objects           merge-one-file          stripspace
 
  credential              merge-ours              submodule
 
  credential-manager      merge-recursive         submodule--helper
 
  credential-store        merge-resolve           subtree
 
  credential-wincred      merge-subtree           svn
 
  cvsexportcommit         merge-tree              symbolic-ref
 
  cvsimport               mergetool               tag
 
  daemon                  mktag                   unpack-file
 
  describe                mktree                  unpack-objects
 
  diff                    mv                      update
 
  diff-files              name-rev                update-git-for-windows
 
  diff-index              notes                   update-index
 
  diff-tree               p4                      update-ref
 
  difftool                pack-objects            update-server-info
 
  difftool--helper        pack-redundant          upload-archive
 
  fast-export             pack-refs               upload-pack
 
  fast-import             patch-id                var
 
  fetch                   prune                   verify-commit
 
  fetch-pack              prune-packed            verify-pack
 
  filter-branch           pull                    verify-tag
 
  fmt-merge-msg           push                    web--browse
 
  for-each-ref            quiltimport             whatchanged
 
  format-patch            read-tree               worktree
 
  fsck                    rebase                  write-tree
 
  fsck-objects            rebase--helper
 


And the command git help -g will give us a list of Git concepts that Git thinks are good for us to know:

 
The common Git guides are:
 
 
   attributes   Defining attributes per path
 
   everyday     Everyday Git With 20 Commands Or So
 
   glossary     A Git glossary
 
   ignore       Specifies intentionally untracked files to ignore
 
   modules      Defining submodule properties
 
   revisions    Specifying revisions and ranges for Git
 
   tutorial     A tutorial introduction to Git (for version 1.5.1 or newer)
 
   workflows    An overview of recommended workflows with Git
 


We can use git help <command> or git help <concept> command to know more about a specific command or concept.

Git Configuration

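A few commonly used configuration commands, for reference (the name and email values below are placeholders):

# set the name and email that will appear on your commits
git config --global user.name "Your Name"
git config --global user.email "you@example.com"
# list the current configuration
git config --list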

 

Git Commit and Push

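A typical commit-and-push sequence looks roughly like this (the commit message and branch name are placeholders):

# stage all changed files in the current directory
git add .
# record the staged changes with a message
git commit -m "Describe your change"
# upload local commits to the remote repository
git push origin master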

 

Git Checkout And Pull

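For reference, a few common checkout and pull commands (branch names are placeholders):

# switch to an existing branch
git checkout existing-branch
# create a new branch and switch to it
git checkout -b new-branch
# fetch and merge the latest changes from the remote branch
git pull origin master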

 

Git Branch

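Some frequently used branch commands, as a quick sketch (branch names are placeholders):

# list local branches
git branch
# list remote branches as well
git branch -a
# create a new branch without switching to it
git branch new-branch
# delete a local branch that has already been merged
git branch -d old-branch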

 

Git Cleaning

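A few commands for cleaning up a working copy, for reference (use them carefully, as they discard work):

# preview which untracked files would be removed
git clean -n
# remove untracked files
git clean -f
# remove untracked files and directories
git clean -fd
# discard local changes to tracked files
git checkout -- .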

 

Other Git Commands

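And a handful of other everyday commands, for reference:

# show the commit history in compact form
git log --oneline
# show unstaged changes
git diff
# shelve local changes temporarily and restore them later
git stash
git stash pop
# list the configured remotes
git remote -v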

Lazy Loading Of Modules In Angular 7

Introduction

 
Lazy loading is the technique of loading a module or data on demand. It helps us improve application performance and reduce the initial bundle size of our files. The initial page loads faster, and we can also split the application into logical chunks which can be loaded on demand.
 
Prerequisites
  • Basic knowledge of Angular 2+ version.
  • Basic knowledge of Routing.  

The step-by-step process

 
 
Let us now understand the steps involved in the demo application.
 
Step 1
 
Open the command prompt and write the command for creating a new Angular application. The CLI gives us an option to add the routing module by default.
 
ng new lazyloadingApp
 
 
Step 2
 
The application is created successfully. Now, navigate to the application folder and open the application in VS Code.
 
 
Step 3
 
Now, create a new routing module file using the given command. Here, --flat creates only the TypeScript file, without putting it in its own folder. 
 
ng generate module app-routing --flat or ng g m app-routing --flat
 
Step 4
 
Now, we are creating two components - home and about - using the below commands for demonstration. You can create the components with any names you like. Here, we are using --module to automatically import the components into the app-routing module.
 
ng g c home --module app-routing
ng g c about --module app-routing 
 
Step 5
 
Now, create one more module file for loading on demand. Let us name it lazy, and create one component named employee in it using the below commands.
 
ng g m Lazy
ng g c Lazy/employee --flat
 
Step 6
 
If the above commands create the files successfully, then open the app-routing.module.ts file and import Routes and RouterModule from @angular/router.
 
Add one constant for defining your routes with the path and component. Here, we use loadChildren to load the module on the user's demand.
 
Use RouterModule.forRoot with our routes array.
 
Now, in your app-routing.module.ts file, add the following code snippet.
 
  1. import { NgModule } from '@angular/core';  
  2. import { CommonModule } from '@angular/common';  
  3. import { HomeComponent } from './home/home.component';  
  4. import { AboutComponent } from './about/about.component';  
  5. import { Routes ,RouterModule} from '@angular/router';  
  6.   
  7. const routes :Routes =  
  8. [  
  9.   {  
  10.     path:'',component:HomeComponent  
  11.   },  
  12.   {  
  13.     path:'home',component:HomeComponent  
  14.   },  
  15.   {  
  16.     path:'about',component:AboutComponent  
  17.   },  
  18.     {  
  19.     path:'lazyloading',   loadChildren : './lazy/lazy.module#LazyModule'  
  20.   },  
  21. ]  
  22.   
  23. @NgModule({  
  24.   declarations: [HomeComponent, AboutComponent],  
  25.   imports: [  
  26.     CommonModule,  
  27.     RouterModule.forRoot(routes),  
  28.   ],  
  29.   exports: [RouterModule]  
  30. })  
  31. export class AppRoutingModule { }  
Step 7
 
Open the lazy.module.ts file and define the components in its routes. Then, use RouterModule.forChild with your child routes array.
 
The following code snippet can be used for lazy.module.ts file.
 
  1. import { NgModule } from '@angular/core';  
  2. import { CommonModule } from '@angular/common';  
  3.   
  4. import { Routes ,RouterModule} from '@angular/router';  
  5. import { EmployeeComponent } from './employee.component';  
  6.   
  7.   
  8. const routes :Routes =  
  9. [  
  10.   {  
  11.     path:'',component:EmployeeComponent  
  12.   }  
  13. ]  
  14. @NgModule({  
  15.   declarations: [EmployeeComponent],  
  16.   imports: [  
  17.     CommonModule,  
  18.     RouterModule.forChild(routes)  
  19.   ]  
  20. })  
  21. export class LazyModule { }  
Step 8
 
Open the app.module.ts file and here, import AppRoutingModule. Your code will look like below.
 
  1. import { BrowserModule } from '@angular/platform-browser';  
  2. import { NgModule } from '@angular/core';  
  3.   
  4. import { AppComponent } from './app.component';  
  5.   
  6. import {AppRoutingModule} from './app-routing.module'  
  7. @NgModule({  
  8.   declarations: [  
  9.     AppComponent,  
  10.       
  11.   ],  
  12.   imports: [  
  13.     BrowserModule,  
  14.     AppRoutingModule  
  15.   ],  
  16.   providers: [],  
  17.   bootstrap: [AppComponent]  
  18. })  
  19. export class AppModule { }  

Step 9

Now, open the app.component.html file. Here, we need to define the routerLink for navigating between the links and use the router-outlet tag for loading the HTML template. 
 
  1.  
  2. <div>
  3. <a routerLink="/home" >home</a>  |     
  4. <a routerLink="/about" >about</a>  |     
  5. <a routerLink="/lazyloading" >employee list</a>    
  6. <router-outlet></router-outlet> 
  7. </div>
  8.    
Step 10
 
Now, run the application using the following command and open http://localhost:4200 in the Chrome browser.
Open the developer tools and go to the Network tab.
Here, you can see that when you click the home or about page, only the initial bundle files are loaded, and when you click the employee list link, the lazy module bundle is loaded.
Given below are the outputs. The first is the initial load and the second is the lazy module load.  
 
ng serve 
 
Initial loading - 
 
Lazy bundles loaded -
 
 
 
I have attached the .rar file of this demonstration. If you want the application code, download it. Use the below command to install the node modules.
 
npm install  
  

Summary


In this article, we learned lazy loading of modules in Angular. Thank you for reading. If you have any questions/feedback, please write in the comments section.

React vs. Angular Compared: Which One Suits Your Project Better?

In the programming world, Angular and React are among the most popular JavaScript frameworks for front-end developers. Moreover, these two – together with Node.js – made it into the top three frameworks used by software engineers across all programming languages, according to the Stack Overflow Developer Survey 2018.

Both of these front-end frameworks are close to equal in popularity, have similar architectures, and are based on JavaScript. So what’s the difference? In this article, we’ll compare React and Angular. Let us start by looking at the frameworks’ general characteristics in the next paragraph. And if you are looking for other React and Angular comparisons, you can review our articles on cross-platform mobile frameworks (including React Native), or comparison of Angular with other front-end frameworks.

Angular and React.js: A Brief Description

Angular is a front-end framework powered by Google and is compatible with most of the common code editors. It’s a part of the MEAN stack, a free open-source JavaScript-centered toolset for building dynamic websites and web applications. It consists of the following components: MongoDB (a NoSQL database), Express.js (a web application framework), Angular or AngularJS (a front-end framework), and Node.js (a server platform).

The Angular framework allows developers to create dynamic, single-page web applications (SPAs). When Angular was first released, its main benefit was its ability to turn HTML-based documents into dynamic content. In this article, we focus on the newer versions of Angular, commonly referred to as Angular 2+ to distinguish them from AngularJS. Angular is used by Forbes, WhatsApp, Instagram, healthcare.gov, HBO, Nike, and more.

React.js is an open-source JavaScript library created by Facebook in 2011 for building dynamic user interfaces. React is based on JavaScript and JSX, a syntax extension developed by Facebook that allows for the creation of reusable HTML-like elements for front-end development. React also has React Native, a separate cross-platform framework for mobile development. We provide an in-depth review of both React.js and React Native in our related article linked above. React is used by Netflix, PayPal, Uber, Twitter, Udemy, Reddit, Airbnb, Walmart, and more.

Toolset: Framework vs. Library

The framework ecosystem defines how seamless the engineering experience will be. Here, we’ll look at the main tools commonly used with Angular and React. First of all, React is not really a framework; it’s a library. It requires multiple integrations with additional tools and libraries. With Angular, you already have everything you need to start building an app.

React and Angular in a nutshell

Angular

Angular comes with many features out of the box:

  • RxJS is a library for asynchronous programming that decreases resource consumption by setting up multiple channels of data exchange. The main advantage of RxJS is that it allows events to be handled simultaneously and independently. The catch is that while RxJS can operate with many frameworks, you have to learn the library to fully utilize Angular.

  • Angular CLI is a powerful command-line interface that assists in creating apps, adding files, testing, debugging, and deployment.

  • Dependency injection - The framework decouples components from their dependencies to run them in parallel and alter dependencies without reconfiguring components (see the sketch after this list).

  • Ivy renderer - Ivy is the new generation of the Angular rendering engine that significantly increases performance.

  • Angular Universal is a technology for server-side rendering, which allows for rapid rendering of the first app page or displaying apps on devices that may lack resources for browser-side rendering, like mobile devices.

  • Aptana, WebStorm, Sublime Text, and Visual Studio Code are code editors commonly used with Angular.

  • Jasmine, Karma, and Protractor are the tools for end-to-end testing and debugging in a browser.
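To illustrate the dependency injection point above, here is a minimal sketch; LoggerService and HeroComponent are illustrative names and not part of any demo in this article.

import { Component, Injectable } from '@angular/core';

// The service is registered with the root injector, so Angular can construct and share it.
@Injectable({ providedIn: 'root' })
export class LoggerService {
  log(message: string) { console.log(message); }
}

@Component({
  selector: 'app-hero',
  template: `<button (click)="save()">Save</button>`
})
export class HeroComponent {
  // Angular injects the service; the component never constructs its dependency itself,
  // so the dependency can be swapped (for example, with a mock) without touching the component.
  constructor(private logger: LoggerService) { }

  save() { this.logger.log('Hero saved'); }
}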

React

React requires multiple integrations and supporting tools to run.

  • Redux is a state container, which accelerates the work of React in large applications. It manages components in applications with many dynamic elements and is also used for rendering (see the sketch below). Additionally, React works with a wider Redux toolset, which includes Reselect, a selector library for Redux, and the Redux DevTools Profiler Monitor.

  • Babel is a transcompiler that converts JSX into JavaScript for the application to be understood by browsers.

  • Webpack - As all components are written in different files, there’s a need to bundle them for better management. Webpack is considered a standard code bundler.

  • React Router - The Router is a standard URL routing library commonly used with React.

  • Similar to Angular, you’re not limited in terms of code editor choice. The most common editors are Visual Studio Code, Atom, and Sublime Text.

  • Unlike in Angular, in React you can’t test the whole app with a single tool; you must use separate tools for different types of testing.

The toolset is also supplemented by Reselect DevTools for debugging and visualization, the React Developer Tools extensions for Chrome and Firefox, and React Sight, which visualizes state and prop trees.
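As a minimal sketch of the Redux state container mentioned above (assuming the redux package is installed; the counter state shape and action names are illustrative):

import { createStore } from 'redux';

interface CounterState { count: number; }
type CounterAction = { type: 'INCREMENT' } | { type: 'DECREMENT' };

// A pure reducer: given the previous state and an action, it returns the next state.
function counter(state: CounterState = { count: 0 }, action: CounterAction): CounterState {
  switch (action.type) {
    case 'INCREMENT': return { count: state.count + 1 };
    case 'DECREMENT': return { count: state.count - 1 };
    default: return state;
  }
}

// The store is the single source of truth; components subscribe to it and dispatch actions.
const store = createStore(counter);
store.subscribe(() => console.log(store.getState()));
store.dispatch({ type: 'INCREMENT' }); // logs { count: 1 }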

Generally, both tools come with robust ecosystems and the user gets to decide which is better. While React is generally easier to grasp, it will require multiple integrations like Redux to fully leverage its capacities.

Component-Based Architecture: Reusable and Maintainable Components With Both Tools

Both frameworks have component-based architectures. That means that an app consists of modular, cohesive, and reusable components that are combined to build user interfaces. Component-based architecture is considered to be more maintainable than other architectures used in web development. It speeds up development by creating individual components that let developers adjust and scale applications with a low time to market.
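As a rough illustration of such a reusable component, here in Angular (the same idea applies to React); UserCardComponent and its input are illustrative names:

import { Component, Input } from '@angular/core';

@Component({
  selector: 'app-user-card',
  template: `<div class="card">{{ name }}</div>`
})
export class UserCardComponent {
  // The component encapsulates its own template and logic and can be reused in any
  // parent view, e.g. <app-user-card name="Sumedh"></app-user-card>.
  @Input() name = '';
}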

Code: TypeScript vs. JavaScript and JSX

Angular uses the TypeScript language (but you can also use JavaScript if needed). TypeScript is a superset of JavaScript fit for larger projects. It’s more compact and makes it easier to spot typing mistakes. Other advantages of TypeScript include better navigation, autocompletion, and faster code refactoring. Being more compact, scalable, and clean, TypeScript is perfect for large projects of enterprise scale.
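For example, here is a minimal sketch of the kind of typing mistake TypeScript catches at compile time (the Employee interface is illustrative):

interface Employee {
  id: number;
  name: string;
}

function greet(employee: Employee): string {
  return `Hello, ${employee.name}`;
}

greet({ id: 1, name: 'Asha' });   // OK
// greet({ id: '1' });            // compile-time error: wrong type for id, missing name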

React uses JavaScript ES6+ and the JSX script. JSX is a syntax extension for JavaScript used to simplify UI coding, making JavaScript code look like HTML. The use of JSX visually simplifies code, which helps with detecting errors and protecting code from injections. JSX is compiled for the browser via Babel, a compiler that translates the code into a format that a web browser can read. JSX syntax performs almost the same functions as TypeScript, but some developers find it too complicated to learn.
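A minimal sketch of JSX (written here as TSX, the TypeScript flavor); EmployeeList is an illustrative component name:

import React from 'react';

// UI code that looks like HTML but compiles down to plain JavaScript function calls.
function EmployeeList(props: { names: string[] }) {
  return (
    <ul>
      {props.names.map(name => (
        <li key={name}>{name}</li>
      ))}
    </ul>
  );
}

export default EmployeeList;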

DOM: Real vs. Virtual

Document Object Model (DOM) is a programming interface for HTML, XHTML, or XML documents, organized in the form of a tree that enables scripts to dynamically interact with the content and structure of a web document and update them.

There are two types of DOMs: virtual and real. Traditional or real DOM updates the whole tree structure, even if the changes take place in one element, while the virtual DOM is a representation mapped to a real DOM that tracks changes and updates only specific elements without affecting the other parts of the whole tree.

The HTML DOM tree of objects
Source: W3Schools

React uses a virtual DOM, while Angular operates on a real DOM and uses change detection to find which components need updates.

While the virtual DOM is considered to be faster than real DOM manipulations, the current implementations of change detection in Angular make both approaches comparable in terms of performance.
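To make the virtual DOM idea concrete, here is a highly simplified conceptual sketch (not React’s actual implementation): two virtual trees are compared, and only the changed nodes would be written to the real DOM.

interface VNode {
  tag: string;
  text: string;
}

// Compare the old and new virtual trees and return the indices of nodes that changed,
// so only those positions in the real DOM need to be updated.
function diff(oldTree: VNode[], newTree: VNode[]): number[] {
  const changed: number[] = [];
  newTree.forEach((node, i) => {
    const prev = oldTree[i];
    if (!prev || prev.tag !== node.tag || prev.text !== node.text) {
      changed.push(i);
    }
  });
  return changed;
}

// Only the second list item changed, so only one real DOM node would be touched.
console.log(diff(
  [{ tag: 'li', text: 'Home' }, { tag: 'li', text: 'About' }],
  [{ tag: 'li', text: 'Home' }, { tag: 'li', text: 'Contact' }]
)); // [1]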

Data Binding: Two-Way vs. Downward (One-Way)

Data binding is the process of synchronizing data between the model (business logic) and the view (UI). There are two basic implementations of data binding: one-directional and two-directional. The difference between one- and two-way data binding lies in the process of model-view updates.

One- and two-way data binding

Two-way data binding in Angular is similar to the Model-View-Controller architecture, where the Model and the View are synchronized, so changing data impacts the view and changing the view triggers changes in the data.
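A minimal sketch of Angular’s two-way binding with ngModel (this assumes FormsModule is imported into the module; ProfileComponent and userName are illustrative names):

import { Component } from '@angular/core';

@Component({
  selector: 'app-profile',
  template: `
    <input [(ngModel)]="userName" />
    <p>Hello, {{ userName }}!</p>
  `
})
export class ProfileComponent {
  // Typing in the input updates userName, and changing userName in code updates the view.
  userName = 'Sumedh';
}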

React uses one-way, or downward, data binding. One-way data flow doesn’t allow child elements to affect the parent elements when updated, ensuring that only approved components change. This type of data binding makes the code more stable, but requires additional work to synchronize the model and view. Also, it takes more time to configure updates in parent components triggered by changes in child components.
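And a minimal sketch of React’s downward data flow (again in TSX; the component and prop names are illustrative): the child receives data and a callback via props and can only report changes back through that callback.

import React, { useState } from 'react';

function Greeting(props: { name: string; onRename: (name: string) => void }) {
  // The child cannot modify the parent's state directly; it only calls the callback.
  return (
    <div>
      <p>Hello, {props.name}!</p>
      <button onClick={() => props.onRename('Reader')}>Rename</button>
    </div>
  );
}

function App() {
  const [name, setName] = useState('Sumedh');
  // Data flows down as a prop; updates flow back up only through the callback.
  return <Greeting name={name} onRename={setName} />;
}

export default App;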

One-way data binding in React is generally more predictable, making the code more stable and debugging easier. However, traditional two-way data binding in Angular is simpler to work with.

App Size and Performance: Angular Has a Slight Advantage

AngularJS is famous for its low performance when you deal with complex and dynamic applications. Due to the virtual DOM, React apps perform faster than AngularJS apps of the same size.

However, newer versions of Angular are slightly faster compared to React and Redux, according to Jacek Schae’s research at freeCodeCamp.org. Also, Angular has a smaller app size compared to React with Redux in the same research. Its transfer size is 129 KB, while React + Redux is 193 KB.

Speed test (ms)
Source: freeCodeCamp

The recent updates to Angular have made the competition between the two even tighter, as Angular no longer falls short in terms of speed or app size.

Pre-Built UI Design Elements: Angular Material vs. Community-Backed Components

Angular. The Material Design language is increasingly popular in web applications. So, some engineers may benefit from having the Material toolset out of the box. Angular has pre-built Material Design components. Angular Material has a range of them that implement common interaction patterns: form controls, navigation, layout, buttons and indicators, pop-ups and modals, and data tables. The presence of pre-built elements makes configuring UIs much faster.
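A minimal sketch of pulling one pre-built Angular Material component into an app (assuming Angular Material is installed; depending on the version, the import path may be @angular/material or @angular/material/button):

import { NgModule } from '@angular/core';
import { MatButtonModule } from '@angular/material/button';

@NgModule({
  // Importing the module makes <button mat-raised-button> available in templates.
  imports: [MatButtonModule],
  exports: [MatButtonModule]
})
export class MaterialModule { }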

React. On the other hand, most of the UI tools for React come from its community. Currently, the UI components section on the React portal provides a wide selection of free components and some paid ones. Using material design with React demands slightly more effort: you must install the Material-UI Library and dependencies to build it. Additionally, you can check for Bootstrap components built with React and other packages with UI components and toolsets.
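A minimal sketch of using a community Material Design component in React (assuming the Material-UI package of that period, installed with npm install @material-ui/core; the SaveButton wrapper is illustrative):

import React from 'react';
import Button from '@material-ui/core/Button';

export function SaveButton() {
  // A pre-built Material Design button provided by the community library.
  return (
    <Button variant="contained" color="primary">
      Save
    </Button>
  );
}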

Mobile Portability: NativeScript vs. React Native

Both frameworks come with additional tools that allow engineers to port the existing web applications to mobile apps. We’ve provided a deep analysis and comparison of both NativeScript (Angular) and React Native. Let’s briefly recap the main points.

NativeScript. NativeScript is a cross-platform mobile framework that uses TypeScript as the core language. The user interface is built with XML and CSS. The tool allows for sharing about 90 percent of code across iOS and Android, porting the business logic from web apps and using the same skill set when working with UIs. The philosophy behind NativeScript is to write a single UI for mobile and slightly adjust it for each platform if needed. Unlike hybrid cross-platform solutions that use WebView rendering, the framework runs apps in JavaScript virtual machines and directly connects to native mobile APIs which guarantees high performance comparable to native apps.

React Native. The JavaScript framework is a cross-platform implementation for mobile apps that also enables portability from the web. React Native takes a slightly different approach compared to NativeScript: RN’s community is encouraged to write individual UIs for different platforms and adhere to the "learn once, write everywhere" approach. Thus, the estimates of code sharing are around 70 percent. React Native also boasts native API rendering like NativeScript, but requires building additional bridge API layers to connect the JavaScript runtime with native controllers.

Generally, both frameworks are a great choice if you need to run both web and mobile apps with the same business logic. While NativeScript is more focused on code sharing and reducing time-to-market, the ideas behind React Native suggest longer development terms but are eventually closer to a native look and feel.

Documentation and Vendor Support: Insufficient Documentation Offset by Large Communities

Angular was created by Google and the company keeps developing the Angular ecosystem. Since January 2018, Google has provided the framework with LTS (Long-Term Support) that focuses on bug fixing and active improvements. Despite the fast development of the framework, the documentation updates aren’t so fast. To make the Angular developer’s life easier, there’s an interactive service that allows you to define the current version of the framework and the update target to get a checklist of update activities.


Unfortunately, the service doesn’t help with transitioning legacy AngularJS applications to Angular 2+, as there’s no simple way to do this.

AngularJS documentation and tutorials are still praised by developers as they provide broader coverage than that of Angular 2+. Considering that AngularJS is outdated, this is hardly a benefit. Some developers also express concerns about the pace of CLI documentation updates.

The React community is experiencing a similar documentation problem. When working with React, you have to prepare yourself for changes and constant learning. The React environment and the ways of operating it update quite often. React has some documentation for the latest versions, but keeping up with all the changes and integrations isn’t a simple task. However, this problem is somewhat neutralized by community support. React has a large pool of developers ready to share their knowledge on thematic forums.

Learning Curve: Much Steeper for Angular

The learning curve of Angular is considered to be much steeper than that of React. Angular is a complex and verbose framework with many ways to solve a single problem. It has intricate component management that requires many repetitive actions.

As we mentioned above, the framework is constantly under development, so the engineers have to adapt to these changes. Another problem of Angular 2+ versions is the use of TypeScript and RxJS. While TypeScript is close to JavaScript, it still takes some time to learn. RxJS will also require much effort to wrap your mind around.

While React also requires constant learning due to frequent updates, it’s generally friendlier to newcomers and doesn’t require much time to learn if you’re already good with JavaScript. Currently, the main learning curve problem with React is the Redux library. About 60 percent of applications built with React use it and eventually learning Redux is a must for a React engineer. Additionally, React comes with useful and practical tutorials for beginners.

Community and Acceptance: Both Are Widely Used and Accepted

React remains more popular than Angular on GitHub. It has 113,719 stars and 6,467 watchers, while Angular has only 41,978 stars and 3,267 watchers. But according to the 2018 Stack Overflow Developer Survey, the number of developers working with Angular is slightly larger: 37.6 percent of users compared to 28.3 percent of React users. It’s worth mentioning that the survey covers both AngularJS and Angular 2+ engineers.

The most popular frameworks and tools of 2018
Source: Stack Overflow

Angular is actively supported by Google. The company keeps developing the Angular ecosystem and since January 2018, it has provided the framework with LTS (Long-Term Support).

However, Angular also leads in a negative way. According to the same survey, 45.6 percent of developers consider it to be among the most dreaded frameworks. This negative feedback on Angular is probably impacted by the fact that many developers still use AngularJS, which has more problems than Angular 2+. But still, Angular’s community is larger.

The numbers are more optimistic for React. Just 30.6 percent of professional developers don’t want to work with it.

Which Framework Should You Choose?

The base idea behind Angular is to provide powerful support and a toolset for a holistic front-end development experience. Continuous updates and active support from Google hint that the framework isn’t going anywhere, and the engineers behind it will keep fighting to preserve the existing community and to make developers and companies switch from AngularJS to the newer Angular 2+ with its higher performance and smaller app sizes. TypeScript increases the maintainability of code, which becomes increasingly important as you reach enterprise-scale applications. But this comes at the price of a steep learning curve and a pool of developers moving towards React.

React offers a much more lightweight approach, letting developers quickly get to work without much learning. While the library doesn’t dictate the toolset and approaches, there are multiple instruments, like Redux, that you must learn in addition. Currently, React is comparable to Angular in terms of performance. These aspects make for broader developer appeal.

Originally published on AltexSoft Tech Blog "React vs. Angular Compared: Which One Suits Your Project Better?"
