My secret to no merge conflicts

PS: This post is for someone who has little knowledge of Git and GitHub.

Yellow People✌!!

I remember when I was a junior dev and I didn’t know how to use Git, what approach to take, or what kind of mindset to adopt. All of this is really confusing when you start afresh. The jargon, the steps you should follow, and so much more; everything stays confusing even after someone explains it. Don’t worry, you are not alone. We have all been there.

What important lessons did I learn as a developer using Git? For one, I really got confused between Git and GitHub. They sound so similar, yet they’re not; it’s something like Java and JavaScript. Well, that’s another topic for another blog😜.

This blog post will be an easy one, I promise. It’ll focus only on how to use Git and GitHub so that you get few or no merge conflicts. I’ll discuss some of my simple steps and how to check in your code without any errors.

Let’s look at some important Git rituals around checking in any new commit.

Before the new changes:
  1. Always, I say “Always”, have the latest version of the branch you’ll add new code to. Always pull the branch before making any change. (Must follow)
  2. If the branch you are using is shared, then try (if possible) to find out if anyone else is working on the same branch. If yes, coordinate with that person.
After the new changes:
  1. Keep a copy of your new changes until your code is checked in successfully.
  2. Do a pull on the branch again, just to make sure you have the latest version.
  3. Compare your new changes (without ignoring spaces) against the latest state of the branch, just to check that you didn’t add, modify, or remove anything that wasn’t needed.
  4. Remove anything that is not required for the feature/bug you’re working on (even spaces). Spaces can also create merge conflicts; I’ll discuss later how spaces can create a big problem.
  5. Always review your commits before you send them; you’ll be amazed how many of your own bugs you catch.
  6. Commit the changes with a good commit message, like the ticket/issue number, or specify WHY you made the change. Using a ticket number in the commit will always help with future tracking.
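The ritual above boils down to a handful of commands. Here is a sketch in a throwaway local repo; the file, branch, and ticket names (PROJ-1, PROJ-2) are made up for illustration, and with a real shared remote you would start with a `git pull` instead of a fresh `git init`:

```shell
# a sketch of the check-in ritual; names are hypothetical
set -e
cd "$(mktemp -d)"
git init -q
git config user.email dev@example.com
git config user.name dev
echo "base" > app.txt
git add app.txt
git commit -qm "PROJ-1: initial commit"
# 1. with a shared remote, always start from the latest state:
#    git pull origin <branch>
git checkout -q -b feature/PROJ-2
echo "new feature" >> app.txt
# 2. review exactly what you are about to check in, spaces included
git diff
git add app.txt
git diff --staged --check || true   # flags stray whitespace before you commit
# 3. commit with the ticket number for future tracking
git commit -qm "PROJ-2: add new feature"
git log --oneline -1
```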
What to do after your commit is done?

Now starts the main step of moving your changes to the actual branch, i.e., creating a pull request. This is where merge conflicts arise. Make sure to verify your branch names before merging a PR, because sometimes the GitHub page gets refreshed and changes the target branch to “main” (it happens a lot).

Some points to keep in mind before merging:

  1. Check the “Able to merge” status before creating the PR.
  2. Re-verify the source and target branches.
  3. Check the number of files you changed.
  4. Compare the changes in all the files. Even the slightest change can cause a merge conflict.

Keeping these points in mind and making a habit out of them will help you in the long run. Better to spend 10 minutes of your own time than waste hours of multiple colleagues’. These steps really worked for me, and they might do you good as well.

What’s the issue with whitespace and merge conflicts?

Whitespace has ended up being a horrible pain for me while using Git. It seems to heighten your chances of getting conflicts. Why? Git compares files line by line, and a line that differs only in whitespace still counts as a changed line. So when one person reformats, re-indents, or leaves trailing spaces while someone else edits the same lines for real, Git marks the overlap with textual conflict markers in the file. That is how whitespace ends up causing merge conflicts.

Though it is possible to deal with whitespace using some Git commands, that also gets cumbersome sometimes.
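As a sketch of the problem, here is a throwaway repo where the only change is a single trailing space: `git diff` reports a change, `git diff --check` flags it as a whitespace error, and `git diff -w` (ignore all whitespace) sees nothing at all:

```shell
# whitespace-only edits as Git sees them, in a throwaway repo
set -e
cd "$(mktemp -d)"
git init -q
git config user.email dev@example.com
git config user.name dev
printf 'hello world\n' > file.txt
git add file.txt
git commit -qm "initial"
# introduce a change that is nothing but a trailing space
printf 'hello world \n' > file.txt
git diff --stat            # the file shows up as changed
git diff --check || true   # --check flags trailing whitespace (non-zero exit)
git diff -w                # -w (--ignore-all-space): no output at all
```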

That’s all folks…!! Those were my simple steps to few or no merge conflicts. You can always have your own steps for dealing with merge conflicts; this is not something carved in stone that you must follow. But it’ll do you good to follow these steps and be spared the horror of resolving merge conflicts.

Please do add your comments and let me know your life-saving tips for using Git and GitHub.

Pomodoro Technique: Why should we use it?

Yellow People✌!!

Who are we? Developers. What do we do? Code.

Coding is fun, and once you get into it you can’t get out; we all know that. We tend to work continuously and forget that we have a life to live; I do that most of the time. Developers often hear, “Dude, get a life. Do something other than code.” Honestly, I have heard it many times.

You get all energetic when it comes to developing something new, or when there is a bug that doesn’t seem to get resolved even after working on it again and again. What do we do then? Generally, we keep working on it until it is resolved or done.

Is it a good thing to continuously work without taking breaks in between?

NO!! It’s a big no. You should not do that. Why? The answer is simple: your brain needs rest, and your body needs rest. Period. A human’s attention span is often put at around 45 minutes, which means we cannot fully concentrate on a task once that limit is reached. Yes, you can continue, and you might still have a decent concentration level, but is it worth the toll on your health?

What will happen when you don’t take breaks while working? Here are some simple effects of working continuously that we tend to ignore:

  • Fatigue
  • Eyes, back and neck pain
  • Lack of concentration
  • Reduced productivity
  • Increased Stress
  • Procrastination
  • Increased screen time

There are many more effects of working continuously that build up on a daily basis. The main question is: how do we solve this? How do we stop ourselves from working continuously?

The answer is simple: take breaks. 😂 Try implementing a time-management method called the Pomodoro Technique. The technique was developed by Francesco Cirillo in the late 1980s. (Wiki)

Wait, it was developed in the 1980s, so why are we discussing it now? The reason is that coding is an addictive and time-consuming process, and developers are affected by working continuously. We need to understand that this habit is harming us in various ways.

The Pomodoro Technique is much like the Agile methodology, except that we implement it under our own management. Simply put, the technique involves:
  1. Break your work down into tasks (each work interval is called a pomodoro).
  2. Assign each pomodoro a strict time limit of at most 25 minutes.
  3. Work on the task until the timer is up.
  4. Take a 5-10 minute break, away from screens.
  5. Come back refreshed and start a new pomodoro.
  6. After every four pomodoros, take a longer break of about 20-25 minutes.
The process trains your brain to focus and helps you make progress despite the distractions.
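If you want a timer without installing anything, the loop above can be sketched in a few lines of shell. The durations here are shrunk to one second so the sketch finishes quickly; a real run would use `work=1500` (25 minutes) and `rest=300` (5 minutes):

```shell
# a minimal pomodoro timer sketch; durations in seconds, shrunk for the demo
work=1      # real pomodoro: 1500 (25 minutes)
rest=1      # real short break: 300 (5 minutes)
cycles=4    # long break after every four pomodoros
i=1
while [ "$i" -le "$cycles" ]; do
  echo "Pomodoro $i of $cycles: focus until the timer is up."
  sleep "$work"
  if [ "$i" -lt "$cycles" ]; then
    echo "Short break: step away from the screen."
    sleep "$rest"
  else
    echo "Four pomodoros done: take a long 20-25 minute break."
  fi
  i=$((i + 1))
done
```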

This looks a little difficult to implement, doesn’t it?

It is difficult to implement. How do we force ourselves to take a break? You must, for the sake of your mental as well as your physical health. We all know the saying “Health is wealth”. Is there anything more important than your health? Nope, never, and there never will be.

Why don’t we start with small steps? Make a small oath to yourself: I will spend this amount of time on this task, and I will not interrupt myself. I can do it, I know that. I’ll take a break when this task is done, at least for 5 minutes.

Start with one day a week, then two days a week, and so on. You’ll see the changes, and I’m telling you, it’s all worth it in the end.

What changes will it bring to your daily routine? Here are some:

  • More concentration, less distractions
  • Less procrastination
  • Time management at its best
  • Increased productivity
  • Refreshed for every new task
  • Relaxation for your eyes, brain, neck and back
  • Increased motivation
  • Promotes mental agility, focus and flow

Do we need more reasons to add this technique to our lives? One of the best things about the Pomodoro Technique is that it’s free and relies on self-management. Implementing it is simple and requires minimal setup. That is why it works wonders for your productivity and health. Grab your pen and paper, plan your day, then start your timer for 25 minutes or less. Easy!!!

Let me know your views on this in the comment section.

Virtual Vs Augmented Reality

Virtual and Augmented Reality are two of the most trending technologies nowadays. Both are growing and showing new applications and solutions they can provide. AR and VR today are two distinct things: think cousins rather than twins. Although they are well known in every field, the difference between them is not very clear. Let’s discuss what they actually are and how they differ from each other.

What are VR and AR?

Virtual reality (VR) implies a complete immersion experience that shuts out the physical world. It completely takes over your vision to give you the impression that you’re somewhere else. A VR experience is limiting in that your focus is entirely on the virtual world: for the most part, you can look around that world, but you can’t look away from it. You’re stuck in it until you remove the headset or shut off the app. Some examples of VR headsets are the HTC Vive Pro Eye, the Oculus Rift S, and many more.

Augmented reality (AR) adds digital elements to a live view, often by using the camera on a smartphone. Where virtual reality replaces your vision, augmented reality adds to it. AR devices are transparent, letting you see everything in front of you, as if you were wearing a weak pair of sunglasses. The technology is designed for completely free movement while projecting images over whatever you look at, be it a floor, a wall, and so on. AR displays can offer anything from something as simple as a data overlay that shows the time to something as complicated as holograms floating in the middle of a room. Examples of augmented reality experiences include Snapchat lenses and the game Pokémon Go.

Science behind AR and VR

Virtual Reality – The technology is a computer-generated simulation of a 3D environment that you can immerse yourself in, navigate around, and seemingly interact with via special hardware, like a chunky headset with handheld sensors. For virtual reality to work, there need to be two things: hardware and software. The hardware powers the VR experience by giving you a display to look at, for instance, while the experience itself is nothing but software, such as a video game that puts you in the middle of the action. With this combo, you strap on a VR headset, load a VR app, and jump into a virtual world. Inside the headset, the light from the LCD or OLED panels is refracted by the lenses to completely fill your field of vision with whatever the VR software is displaying.

Augmented Reality – This lets you experience a computer-generated simulation of either a 3D or 2D environment, superimposed onto your actual view of the real world to create a composite view. AR can also add contextual layers of information in real time, so you can see suggested restaurants nearby, for instance, while walking down the street as 3D aliens run past you. Like VR, AR needs both hardware and software to work: there must be something that powers and displays the augmented reality, while the augmented reality itself is software, a game, or an app designed by a developer. But the main thing to realize about AR is that it enables you to interact with the real world while simultaneously experiencing something totally augmented.

VR vs. AR

VR replaces reality, taking you somewhere else. AR adds to reality, projecting information on top of what you’re already seeing. In VR, the user no longer perceives the real environment; they experience only the digital 3D world, through aids such as VR glasses, where the virtual world can be seen, heard, and felt. In AR, the user still sees the real world but receives additional information displayed by the AR device, in either 2D or 3D form. VR content consists of 360-degree images, 360-degree videos, and fully created 3D worlds, while AR content consists of text, images, animations, videos, and static or moving 3D objects.

A VR device doesn’t just detect the direction in which you’re facing but also any movement you make in those directions, whereas an AR device simply displays content on top of whatever the camera is looking at, without tracking your movement.

Augmented and virtual reality have one big thing in common: they both have the remarkable ability to alter our perception of the world. Where they differ is in the perception of our presence.

Augmented and virtual technologies are developing quickly. We’ll be seeing a lot more from both augmented reality and virtual reality soon. Devices will become cheaper, experiences more accessible, and with any luck this new technology will continue to improve all our lives.

Protocol Buffer

Optimization is one of the main concerns for any application, and making an application faster and simpler with minimal performance impact takes a lot of work. As a developer, I know how hard it can be sometimes. In optimization work, serialization and deserialization come in handy, and they can be one of the biggest challenges in terms of performance.

Ever wondered how FAANG companies and many more handle optimization? Ever thought about how it happens at Google, Facebook, or Amazon? Do they use XML, JSON, or other formats? Yes, they each use different formats. Amazon uses an optimization format called Ion, Facebook uses Apache Thrift, and Google uses Protocol Buffers, a.k.a. Protobuf.

In this post, I will mainly discuss Protobuf, a data interchange format that was long used internally at Google and is now available under an open-source license.

Protobuf is the common short name for Protocol Buffers. By definition, protocol buffers are “Google’s language-neutral, platform-neutral, extensible mechanism for serializing structured data – think XML, but smaller, faster, and simpler”. Source: Protocol Buffers documentation.

Google originally developed Protobuf for internal use, for storing and interchanging all kinds of structured information. The main goals of Protobuf are simplicity and performance: it was designed to be smaller and faster than XML and JSON, which is indeed a performance benefit.
The latest version of Protocol Buffers is v3.13.0 (at the time of writing).

How Protobuf works:

The main aim of Protobuf is to make the process simpler and faster. You can use any of the common languages supported by Google, such as C++, C#, Dart, Go, Java, and Python. Let’s discuss how Protobuf works in general.

There are mainly three steps in the process. They are:

  1. Create a file with the .proto extension.
  2. Define a message in it for each data structure you want to serialize.
  3. Compile the file with protoc, which generates classes in your chosen language along with a protocol buffer API for reading and writing the messages.

Messages are defined in a .proto file and compiled with protoc. From this file, the compiler creates a class that implements automatic encoding and parsing of the data in an efficient binary format. The generated class provides getters and setters for the fields that make up a protocol buffer and takes care of the details of reading and writing the protocol buffer as a unit.
Importantly, the protocol buffer format supports extending the format over time in such a way that code can still read data encoded with the old format.
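As an illustration, here is a minimal .proto file (the package, message, and field names are made up for this example). Compiling it with, say, `protoc --python_out=. person.proto` would generate the class described above:

```proto
syntax = "proto3";

package addressbook;

// a hypothetical data structure to serialize
message Person {
  string name  = 1;  // field numbers identify fields in the binary format
  int32  id    = 2;
  string email = 3;
}
```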

Protobuf vs. JSON vs. XML

Protobuf
  • Uses binary message format
  • Easier to bind to objects
  • Faster at integer encoding
  • Guarantees type-safety
  • Dense binary, non-readable by humans

JSON
  • Text data format
  • Integers and floats can be slow to encode and decode
  • Not designed for numbers
  • Human readable
  • Mostly applicable when the server side is JavaScript

XML
  • Text data format
  • Can be parsed without knowing schema
  • Good tooling support
  • More work to decode
  • Exchanges data as strings, which must then be parsed with a parser on retrieval

Though Protobuf has lots of pros in many areas, it also comes with notable cons. Protocol Buffers’ learning curve is somewhat steep, and it can be really difficult when fewer resources are available and the community is smaller. The biggest drawback is that Protobuf is not human-readable, unlike JSON and XML.

The performance of Protobuf is significantly better than JSON and XML serialization: it is smaller, faster, and requires less network bandwidth. Still, Protobuf, XML, and JSON are all great; it mostly depends on the situation you are in and what suits you best.

Chaos Monkey : Introduction

In love with Netflix ❤, its content, and how a huge number of people use it without any issues. Ever wondered how Netflix keeps running all the time, without failures or outages, even though an enormous number of people use it every hour? As a developer, I was always curious how Netflix manages so many requests across the world without any breakdown.

And that curiosity led me to Chaos Monkey. Yes, it is the answer. You don’t believe me? Please go ahead and read the blog; it will clear up your doubts. Before getting to the main topic, I will first go through what it is actually based on.

First and foremost, a resiliency tool tests how an application behaves under stress. In simple words, it is a tool that ensures applications perform well in real-life conditions. As the term suggests, resilience in software describes its ability to withstand stress and other challenging factors, continue performing its core functions, and avoid loss of data.

Secondly, Chaos Engineering — Chaos Engineering is the discipline of experimenting on a software system in production in order to build confidence in the system’s capability to withstand turbulent and unexpected conditions. Source: wiki.

Chaos engineering is a way to ensure the resilience requirements of software: at peak times, a system’s ability to endure failure while still ensuring proper quality of service and keeping the system going. Chaos engineering is a disciplined approach to identifying failures before they become outages. It provides resilience against three main classes of failure: infrastructure, network, and application failures. Chaos engineering mainly focuses on:

  1. Increasing flexibility of development and velocity of deployment.
  2. A systems-based approach that addresses the chaos in distributed systems at scale and builds confidence in the ability of those systems to withstand realistic conditions.
  3. Facilitating experiments to uncover weaknesses.

Now, after this short glimpse of what Chaos Monkey is based on, we will continue to our main topic, i.e., Chaos Monkey.

By definition, Chaos Monkey is responsible for randomly terminating instances in production to ensure that engineers implement their services to be resilient to instance failures. source: Netflix.

In simple terms, what Chaos Monkey does is terminate instances running in production and check whether the running services are resilient enough to keep the system going.
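As a toy sketch of that idea (the instance names and inventory here are entirely hypothetical; the real tool works against cloud APIs, not a shell variable):

```shell
# toy Chaos Monkey: pick a pseudo-random "instance" and pretend to kill it
instances="i-app-01 i-app-02 i-app-03 i-app-04"
victim=$(echo "$instances" | tr ' ' '\n' | shuf -n 1)
echo "Chaos Monkey: terminating $victim"
# against real infrastructure this would be an actual kill, e.g. on AWS:
#   aws ec2 terminate-instances --instance-ids "$victim"
# the service passes the test if users never notice the loss
```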

How Chaos Monkey came into existence:

In 2010, Netflix decided to move their systems to the cloud. In a cloud environment, hosts can be terminated and replaced at any time, which meant their services needed to be prepared for this constraint. For Netflix, moving to a cloud platform meant rebooting their own hosts, so they could suss out any weaknesses or failures and validate that the automated processes worked correctly and the system stayed up and running.

In the starting phase, Netflix depended on Amazon Web Services (AWS) and needed a technology that could show them how their system responded when critical components of the production service were taken down. Intentionally causing such a failure would suss out weaknesses in their systems and guide them toward automated solutions that gracefully handle future failures of this sort.

To make sure the system stays up without any issues, Netflix came up with the idea of Chaos Monkey, which is indeed based on the principles of chaos engineering.

Chaos Monkey is a resiliency tool invented by Netflix in 2011 to test the resilience of its infrastructure. It works by intentionally disabling pseudo-random computers in Netflix’s production network to test how the remaining systems respond to the outage. Exposing engineers to failures more frequently trains them to build resilient services.

Chaos Monkey is not the only one on the savior list. A whole suite of chaos tools has been developed to simulate outages and test system response times, collectively known as the Simian Army, which is another topic to discuss. And there, my friend, that’s how Netflix is so amazing and handles so many requests so beautifully without any issues.
