So, what exactly is a layer mask and what does it do? Quite simply, a layer mask is something we can add to a layer that allows us to control the transparency of that layer. Of course, there are other ways in Photoshop to control a layer’s transparency as well. The Opacity option in the Layers panel is one way to adjust transparency. The Eraser Tool is another common way to add transparency to a layer. So what makes layer masks so special?
While the Opacity option in the Layers panel does allow us to control a layer’s transparency, it’s limited by the fact that it can only adjust transparency for the entire layer as a whole. Lower the Opacity value down to 50% and the entire layer becomes 50% transparent. Lower it to 0% and the entire layer is completely hidden from view.
That may be fine in some situations. But what if you need only part of a layer to be transparent? What if, say, you want the left side of a layer to be 100% transparent (completely hidden) and the right side to be 100% visible, with a smooth transition between them in the middle? What I’ve just described is a very common technique in Photoshop, allowing us to fade one image into another. But since we’d need to adjust the transparency level of different areas of the layer separately, and the Opacity option can only affect the entire layer as a whole, this simple effect is beyond what the Opacity option can do.
The Layer Opacity Option
The Opacity option is found in the upper right of the Layers panel. By default, it’s set to 100% which means that the layer is fully visible in the document. Let’s lower it down to 70%:
Here, we see the result. Lowering the opacity of my “Cat” layer causes the cat image to appear faded in the document, allowing the dog image below it (as well as the checkerboard pattern to the right of the dog image) to partially show through. Yet because the Opacity option affects the entire layer as a whole, the entire cat image appears faded. What I wanted was a smooth transition from one image to another, but all I got was the bottom layer showing through the top layer:
If we lower the Opacity value all the way down to 0%:
All we end up doing is hiding the top layer completely. Again, it’s because the Opacity value affects the entire layer as a whole. There’s no way to adjust different parts of the layer separately:
Since the Opacity option is not going to give us the result we’re looking for, let’s set it back to 100%:
This brings the top image back into view and returns us to where we started:
The Eraser Tool
Now that we’ve looked at the Opacity option, let’s see if Photoshop’s Eraser Tool can give us better results. Unlike the Opacity option which affects the entire layer at once, Photoshop’s Eraser Tool can easily adjust the transparency of different parts of a layer separately. That’s because the Eraser Tool is nothing more than a brush, and to use it, we just drag the brush over any areas we want to remove.
Since the Eraser Tool is so simple and intuitive (everyone knows what an eraser is), it’s usually one of the first tools we turn to when learning Photoshop. And that’s unfortunate, because the Eraser Tool has one serious drawback. As its name implies, the Eraser Tool works by erasing (deleting) pixels in the image. And once those pixels are gone, there’s no way to get them back.
This is known as a destructive edit in Photoshop because it makes a permanent change to the original image. If, later on, we need to restore some of the area we erased with the Eraser Tool, there’s no easy way to do it. Often, our only option at that point would be to re-open the original image (assuming you still have it) and start the work all over again.
Saving Our Work
Let’s look at the Eraser Tool in action. But before we do, we’ll quickly save our document. That way, when we’re done with the Eraser Tool, we’ll be able to easily return to our document’s original state. To save it, go up to the File menu at the top of the screen and choose Save As:
Now that we’ve saved the document, I’ll select the Eraser Tool from the Toolbar. I could also select it by pressing the letter E on my keyboard:
I’ll continue erasing more of the cat image to blend it in with the dog image, and here’s the result. As we see, the Eraser Tool made it easy to blend the two photos together:
This allows us to see just my cat image in the document, and look what’s happened. All of the areas I dragged over with the Eraser Tool are now gone. The checkerboard pattern in their place tells us that those parts of the image are now blank. If, later on, I realize that I erased too much of the cat image and need to bring some of it back, I’d be out of luck. Once those pixels have been deleted, they’re gone for good:
Of course, at the moment, I could probably just undo my brush strokes to restore the areas I deleted. But that won’t always be the case. Photoshop gives us only a limited number of undos, so if I had done more work on the document after erasing the pixels, I may not be able to go back far enough in my document’s history to undo it. Also, once we close the document, we lose our file history, which means that the next time we open the document to continue working, Photoshop would have no record of our previous steps and no way to undo them.
Restoring The Image
Fortunately, in this case, we planned ahead and saved our document before using the Eraser Tool. To revert the document back to the way it looked before we erased any pixels, all we need to do is go up to the File menu at the top of the screen and choose Revert:
This returns the document back to the way it looked the last time we saved it, restoring the pixels in the top image:
Adding A Layer Mask
So far, we’ve seen that the Opacity option in the Layers panel can only affect entire layers at once, and that the Eraser Tool causes permanent damage to an image. Let’s see if a layer mask can give us better results.
We want to blend the top image in with the layer below it, which means that we’ll need to hide some of the top layer to let the bottom layer show through. The first thing we’ll need to do, then, is select the top layer in the Layers panel (if it isn’t selected already):
Nothing will happen to the images in the document, but if we look again in the Layers panel, we see that the top layer now shows a layer mask thumbnail to the right of its preview thumbnail:
As Easy As Black And White (And Gray)
Notice that the layer mask thumbnail is filled with white. Why white? Why not black, or red, or blue? Well, the reason it’s not filled with red or blue is because layer masks are grayscale images. A grayscale image is an image that uses only black, white and the various shades of gray in between. It can’t display any other colors.
Many people think of grayscale images as black and white images. But really, most black and white photos are actually grayscale photos, not black and white, since a true “black and white” photo would contain only pure black and pure white, with no other shades of gray, and that would make for a pretty odd looking image.
So, since layer masks are grayscale images, that explains why the layer mask isn’t filled with red or blue. But why white? Why not black or gray? Well, we use a layer mask to control the transparency level of a layer. Usually, we use it to adjust the transparency of different areas of the layer independently (otherwise we’d just use the Opacity option in the Layers panel that we looked at earlier).
But by default, when we first add a layer mask, Photoshop keeps the entire layer fully visible. It does that by filling the layer mask with white. Why white? It’s because the way a layer mask works is that it uses white to represent the areas of the layer that should remain 100% visible in the document. It uses black to represent areas that should be 100% transparent (completely hidden). And, it uses the various shades of gray in between to represent partial transparency, with areas filled with darker shades of gray appearing more transparent than areas filled with lighter shades.
In other words, with layer masks, we use white to show the contents of the layer, black to hide them, and gray to partially show or hide them. And that’s really all there is to it!
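The show/hide/partially-show behavior boils down to simple per-pixel arithmetic. The sketch below is not Photoshop’s actual code, just an illustration of the blending math, with made-up pixel brightness values:

```python
# A minimal sketch of how a layer mask controls per-pixel visibility.
# Values are floats from 0.0 to 1.0; a mask value of 1.0 (white) shows
# the top layer, 0.0 (black) shows the bottom layer, 0.5 (gray) blends.

def composite_pixel(top, bottom, mask):
    """Blend one pixel of the top layer over the bottom layer
    using the mask value (0.0 = black/hidden, 1.0 = white/visible)."""
    return top * mask + bottom * (1.0 - mask)

# Hypothetical pixel: top layer brightness 0.8, bottom layer 0.2
print(composite_pixel(0.8, 0.2, 1.0))  # white mask: top fully visible
print(composite_pixel(0.8, 0.2, 0.0))  # black mask: top fully hidden
print(composite_pixel(0.8, 0.2, 0.5))  # 50% gray: halfway blend
```

Painting black or white onto the mask with a brush is just a way of setting these per-pixel mask values.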
Since my layer mask is currently filled with white, and white on a layer mask represents areas on the layer that are 100% visible, my entire image on the “Cat” layer is fully visible in the document:
Then, to fill the layer mask with black, go up to the Edit menu at the top of the screen and choose Fill:
Back in the Layers panel, we see that the layer mask thumbnail is now filled with solid black:
Since black on a layer mask represents areas on the layer that are 100% transparent, filling the entire layer mask with black causes the contents of the layer (my cat photo) to be completely hidden from view. This gives us the same result as if we had lowered the Opacity option in the Layers panel down to 0%:
What if we fill the layer mask with gray? Let’s give it a try. I’ll go back up to the Edit menu and I’ll once again choose Fill:
Back in the Layers panel, we see that my layer mask thumbnail is now filled with 50% gray (the shade of gray directly between pure black and pure white):
Since gray on a layer mask represents areas of partial transparency on the layer, and we filled the mask specifically with 50% gray, my cat photo now appears 50% transparent in the document, giving us the same result as if we had lowered the Opacity option to 50%:
Let’s restore the image back to 100% visibility by again going up to the Edit menu and choosing Fill:
This fills our layer mask with white, just like it was originally:
And the image on the layer is once again 100% visible:
Destructive vs Non-Destructive Editing
So far, layer masks haven’t seemed like anything special. In fact, as we’ve seen, filling a layer mask entirely with solid white, black or gray gives us the same result as using the Opacity option in the Layers panel. If that was all that layer masks could do, there would be no need for layer masks since the Opacity option is faster and easier to use.
But layer masks in Photoshop are a lot more powerful than that. In fact, they have more in common with the Eraser Tool than with the Opacity option. Like the Eraser Tool, layer masks allow us to easily show and hide different areas of a layer independently.
But here’s the important difference. While the Eraser Tool permanently deletes areas of an image, layer masks simply hide those areas from view. In other words, the Eraser Tool makes destructive edits to an image; layer masks do it non-destructively. Let’s see how it works.
First, let’s make sure once again that our layer mask, not the layer itself, is selected. You should see the white highlight border around the mask thumbnail:
The Brush Tool
I mentioned earlier that the Eraser Tool is a brush. With layer masks, we don’t use the Eraser Tool itself, but we do use a brush. In fact, we use Photoshop’s Brush Tool. I’ll select it from the Toolbar. You can also select the Brush Tool by pressing the letter B on your keyboard:
Since we want to use the Brush Tool to hide areas of the layer we paint over, and we know that on a layer mask, black represents areas that are hidden, we’ll need to paint with black. Photoshop uses our current Foreground color as the brush color. But by default, whenever we have a layer mask selected, Photoshop sets the Foreground color to white, not black.
We can see our current Foreground and Background colors in the color swatches near the bottom of the Toolbar. Notice that the Foreground color (the swatch in the upper left) is set to white and that the Background color (the swatch in the lower right) is set to black. These are the default colors when working with layer masks:
To set our Foreground color to black, all we need to do is swap the current Foreground and Background colors, and the easiest way to do that is by pressing the letter X on your keyboard. This sets the Foreground color, and our brush color, to black:
Painting With Black To Hide Areas
Then, with black as my brush color, I’ll start painting over roughly the same areas that I did with the Eraser Tool. Because I’m painting on a layer mask, not on the layer itself, we don’t see the brush color as we paint. Instead, since I’m painting with black, and black hides areas on a layer mask, the areas I paint over are hidden from view:
I’ll continue hiding more of the cat image by painting over more areas with black until I get a result similar to what I achieved with the Eraser Tool:
At this point, the difference between a layer mask and the Eraser Tool isn’t all that obvious. Both of them allowed me to blend my two images together by hiding parts of the top layer, and both gave me similar results. Yet as we saw earlier, the Eraser Tool permanently deleted the areas I erased. Let’s look more closely at what’s happened with the layer mask.
First, let’s look again at our layer mask thumbnail in the Layers panel where we see that it’s no longer filled with just solid white. Some of it remains white, but we can also see the areas where we painted on it with black:Viewing The Layer Mask
It’s important to understand that the layer mask thumbnail in the Layers panel is not the actual layer mask itself. The thumbnail is there simply to give us a way to select the layer mask so we can work on it, and to show us a small preview of what the full size layer mask looks like. To view the actual layer mask in the document, press and hold Alt (Win) / Option (Mac) and click on the layer mask thumbnail.
This temporarily hides our image and replaces it with the layer mask, giving us a better view of what we’ve done. In my case, the white area on the right is where my cat photo remains 100% visible. The areas I painted over with black are the areas where my cat image is now 100% transparent, allowing the dog photo below the layer to show through.
And, because I painted with a soft-edge brush, we see a feathering effect around the black areas, creating narrow gradients that transition smoothly from black to white. Since we know that gray on a layer mask creates partial transparency, and darker shades of gray appear more transparent than lighter shades, those dark-to-light gradients between the black (100% transparent) and white (100% visible) areas allow my two images to transition smoothly together:
And now, we’re back to seeing our images:
Turning The Layer Mask Off
To temporarily turn a layer mask off, press and hold Shift and click on the layer mask thumbnail. With the layer mask turned off, we’re no longer seeing its effects in the document, and this is where the difference between the Eraser Tool and a layer mask becomes obvious. Remember, the Eraser Tool permanently deleted areas of the image. Yet as we see, the layer mask did not. All the layer mask did was hide those areas from view. When we turn the mask off, the entire image on the layer returns:
Painting With White To Restore Hidden Areas
Since a layer mask simply hides, rather than deletes, areas on a layer, and our original image is still there, it’s easy to bring back any areas that were previously hidden. We know that white on a layer mask makes those areas 100% visible, so all we need to do is paint over any areas we want to restore with white.
To change your brush color from black to white, press the letter X on your keyboard to swap your Foreground and Background colors back to their defaults. This sets your Foreground color (and your brush color) to white:
Then, with the layer mask still selected and white as your brush color, simply paint over any areas that were previously hidden to make them visible. In my case, I’ll paint over the dog’s paw in the bottom center to hide it and show the cat image in its place:
With the layer mask itself now visible, we see how easy it was to restore the top image in that area. Even though I had previously painted over it with black to hide the cat photo from view, all I had to do to restore it was paint over that same area with white:
What is Deep Learning?
Deep learning is a machine learning technique that mimics the network of neurons in the brain. It is a subset of machine learning based on artificial neural networks with representation learning. It is called deep learning because it makes use of deep neural networks. This learning can be supervised, semi-supervised, or unsupervised.
Deep learning algorithms are constructed with connected layers.
The first layer is called the Input Layer
The last layer is called the Output Layer
All layers in between are called Hidden Layers. The word deep means the network joins neurons in more than two layers.
Each hidden layer is composed of neurons. The neurons are connected to each other. Each neuron processes the input signal it receives and then propagates it to the layer above it. The strength of the signal passed to a neuron in the next layer depends on the weight, bias, and activation function.
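The weight, bias, and activation idea can be sketched in a few lines of Python. The input values, weights, biases, and sigmoid activation below are arbitrary choices for illustration, not values from any real trained network:

```python
import math

def neuron(inputs, weights, bias):
    """One neuron: a weighted sum of its inputs plus a bias,
    passed through an activation function (sigmoid here)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes z into (0, 1)

# A tiny network: 2 inputs -> 2 hidden neurons -> 1 output neuron
x = [0.5, -1.0]
hidden = [
    neuron(x, [0.8, 0.2], 0.1),
    neuron(x, [-0.4, 0.9], 0.0),
]
output = neuron(hidden, [1.5, -2.0], 0.3)
print(output)  # a value strictly between 0 and 1
```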
The network consumes large amounts of input data and operates them through multiple layers; the network can learn increasingly complex features of the data at each layer.
A deep neural network provides state-of-the-art accuracy in many tasks, from object detection to speech recognition. They can learn automatically, without predefined knowledge explicitly coded by the programmers.
Deep learning Process
To grasp the idea of deep learning, imagine a family with an infant and his parents. The toddler points at objects with his little finger and always says the word ‘cat.’ As his parents are concerned about his education, they keep telling him ‘Yes, that is a cat’ or ‘No, that is not a cat.’ The infant keeps pointing at objects but becomes more accurate about ‘cats.’ The little kid, deep down, does not know why he can say whether something is a cat or not. He has simply learned to build a hierarchy of complex features that add up to ‘cat’: he looks at the pet overall and then focuses on details such as the tail or the nose before making up his mind.
A neural network works in much the same way. Each layer represents a deeper level of knowledge, i.e., a hierarchy of knowledge. A neural network with four layers will learn more complex features than one with two layers.
The learning occurs in two phases:
First Phase: the network applies a non-linear transformation to the input and produces a statistical model as output.
Second Phase: the model is improved with a mathematical method known as the derivative.
The neural network repeats these two phases hundreds to thousands of times until it has reached a tolerable level of accuracy. Each repetition of these two phases is called an iteration.
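The two-phase loop can be sketched with the simplest possible “network”: a single weight learning y = 2x. The training data, learning rate, and squared-error loss below are illustrative assumptions:

```python
# Sketch of the two phases: a forward pass that makes a prediction,
# then a derivative-based update that nudges the weight to reduce error.
# The "network" here is a single weight w learning y = 2x.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = 0.0
lr = 0.05  # learning rate

for iteration in range(200):          # each loop is one iteration
    for x, y in data:
        pred = w * x                  # phase 1: forward pass
        grad = 2 * (pred - y) * x     # phase 2: derivative of squared error
        w -= lr * grad                # update the weight

print(round(w, 3))  # w converges toward 2.0
```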
To give a deep learning example, take a look at the animation below: the model is trying to learn how to dance. After 10 minutes of training, the model does not know how to dance, and its output looks like a scribble.
After 48 hours of learning, the computer masters the art of dancing.
Classification of Neural Networks
Shallow neural network: The Shallow neural network has only one hidden layer between the input and output.
Deep neural network: Deep neural networks have more than one layer. For instance, Google LeNet model for image recognition counts 22 layers.
Nowadays, deep learning is used in many areas, such as driverless cars, mobile phones, the Google search engine, fraud detection, TV, and so on.
Types of Deep Learning Networks
Now in this Deep Neural network tutorial, we will learn about types of Deep Learning Networks:
Feed-forward neural networks
The simplest type of artificial neural network. With this type of architecture, information flows in only one direction, forward: it starts at the input layer, goes through the “hidden” layers, and ends at the output layer. The network does not have a loop; information stops at the output layer.
Recurrent neural networks (RNNs)
An RNN is a multi-layered neural network that can store information in context nodes, allowing it to learn data sequences and output a number or another sequence. In simple words, it is an artificial neural network whose connections between neurons include loops. RNNs are well suited for processing sequences of inputs.
For example, if the task is to predict the next word in the sentence “Do you want a …?”
The RNN neurons will receive a signal that points to the start of the sentence.
The network receives the word “Do” as an input and produces a vector of numbers. This vector is fed back to the neuron to provide a memory to the network. This stage helps the network remember that it received “Do”, and that it received it in the first position.
The network proceeds similarly with the next words. It takes the words “you” and “want.” The state of the neurons is updated upon receiving each word.
The final stage occurs after receiving the word “a.” The neural network will provide a probability for each English word that can be used to complete the sentence. A well-trained RNN probably assigns a high probability to “café,” “drink,” “burger,” etc.
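A single RNN step can be sketched like this. The one-number word “embeddings” and the weights below are made up purely for illustration; a real RNN uses learned vectors and weight matrices:

```python
import math

# Minimal sketch of an RNN cell: the hidden state h carries a memory of
# the words seen so far.
def rnn_step(x, h, w_x, w_h, b):
    """Update the hidden state from the current input x and previous state h."""
    return math.tanh(w_x * x + w_h * h + b)

# Toy one-number "embeddings" for the words in "Do you want a"
embedding = {"Do": 0.1, "you": 0.5, "want": -0.3, "a": 0.7}

h = 0.0  # empty memory at the start of the sentence
for word in ["Do", "you", "want", "a"]:
    h = rnn_step(embedding[word], h, w_x=1.2, w_h=0.8, b=0.05)
    print(word, round(h, 3))  # the state changes as each word arrives
```

Because each new state depends on the previous one, the final state encodes the whole sequence, which is what lets the network assign probabilities to the next word.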
Common uses of RNN
Help securities traders to generate analytic reports
Detect abnormalities in the contract of financial statement
Detect fraudulent credit-card transaction
Provide a caption for images
The standard uses of RNN occur when the practitioners are working with time-series data or sequences (e.g., audio recordings or text).
Convolutional neural networks (CNN)
A CNN is a multi-layered neural network with a unique architecture designed to extract increasingly complex features of the data at each layer to determine the output. CNNs are well suited for perceptual tasks.
CNN is mostly used when there is an unstructured data set (e.g., images) and the practitioners need to extract information from it.
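The sliding-filter operation at the heart of a CNN can be sketched in plain Python. The tiny 4x4 image and the edge-detecting kernel below are illustrative, not taken from any real model:

```python
# Sketch of the convolution at the core of a CNN: a small filter slides
# over the image and responds strongly where it matches a local pattern.
def convolve2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# A vertical-edge detector applied to a 4x4 grayscale image
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[-1, 1],
          [-1, 1]]
print(convolve2d(image, kernel))  # large values where the edge sits
```

A real CNN learns the kernel values during training instead of hand-coding them, and stacks many such filters per layer.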
For instance, if the task is to predict an image caption:
The CNN receives an image of, let’s say, a cat; this image, in computer terms, is a collection of pixels: generally one channel for a grayscale picture and three channels for a color picture.
During the feature learning (i.e., hidden layers), the network will identify unique features, for instance, the tail of the cat, the ear, etc.
Once the network has thoroughly learned how to recognize a picture, it can provide a probability for each image it knows. The label with the highest probability becomes the prediction of the network.
Reinforcement Learning
Reinforcement learning is a subfield of machine learning in which systems are trained by receiving virtual “rewards” or “punishments,” essentially learning by trial and error. Google’s DeepMind has used reinforcement learning to beat a human champion at the game of Go. Reinforcement learning is also used in video games to improve the gaming experience by providing smarter bots.
Some of the most famous algorithms are:
Deep Q network
Deep Deterministic Policy Gradient (DDPG)
Examples of deep learning applications
Now in this Deep learning for beginners tutorial, let’s learn about Deep Learning applications:
AI in Finance:
The financial technology sector has already started using AI to save time, reduce costs, and add value. Deep learning is changing the lending industry by using more robust credit scoring. Credit decision-makers can use AI for robust credit lending applications to achieve faster, more accurate risk assessment, using machine intelligence to factor in the character and capacity of applicants.
Underwrite is a Fintech company providing an AI solution for credit lenders. It uses AI to detect which applicants are more likely to pay back a loan. Their approach radically outperforms traditional methods.
AI in HR:
Under Armour, a sportswear company, revolutionized hiring and modernized the candidate experience with the help of AI. In fact, Under Armour reduced hiring time for its retail stores by 35%. Under Armour faced growing popularity back in 2012. They received, on average, 30,000 resumes a month. Reading all of those applications and starting the screening and interview process was taking too long. The lengthy process to get people hired and on-boarded impacted Under Armour’s ability to have their retail stores fully staffed, ramped, and ready to operate.
At that time, Under Armour had all of the ‘must have’ HR technology in place, such as transactional solutions for sourcing, applying, tracking, and onboarding, but those tools weren’t effective enough. Under Armour chose HireVue, an AI provider of HR solutions, for both on-demand and live interviews. The results were impressive: they managed to decrease time-to-fill by 35% and, in return, hired higher-quality staff.
AI in Marketing:
AI is a valuable tool for customer service management and personalization challenges. Improved speech recognition in call-center management and call routing as a result of the application of AI techniques allows a more seamless experience for customers.
For example, deep-learning analysis of audio allows systems to assess a customer’s emotional tone. If the customer is responding poorly to the AI chatbot, the system can reroute the conversation to real, human operators who take over the issue.
Apart from the three Deep learning examples above, AI is widely used in other sectors/industries.
Why is Deep Learning Important?
Deep learning is a powerful tool for turning predictions into actionable results. Deep learning excels at pattern discovery (unsupervised learning) and knowledge-based prediction. Big data is the fuel for deep learning. When the two are combined, an organization can reap unprecedented results in terms of productivity, sales, management, and innovation.
Deep learning can outperform traditional methods. For instance, deep learning algorithms are 41% more accurate than machine learning algorithms in image classification, 27% more accurate in facial recognition, and 25% more accurate in voice recognition.
Limitations of deep learning
Now in this Neural network tutorial, we will learn about the limitations of Deep Learning:
Data labeling
Most current AI models are trained through “supervised learning.” It means that humans must label and categorize the underlying data, which can be a sizable and error-prone chore. For example, companies developing self-driving-car technologies are hiring hundreds of people to manually annotate hours of video feeds from prototype vehicles to help train these systems.
Obtain huge training datasets
It has been shown that simple deep learning techniques like CNN can, in some cases, imitate the knowledge of experts in medicine and other fields. The current wave of machine learning, however, requires training data sets that are not only labeled but also sufficiently broad and universal.
Deep-learning methods require thousands of observations for models to become relatively good at classification tasks and, in some cases, millions for them to perform at the level of humans. Unsurprisingly, deep learning is popular in giant tech companies: they use big data to accumulate petabytes of data, which allows them to create impressive and highly accurate deep learning models.
Explain a problem
Large and complex models can be hard to explain in human terms, for instance, why a particular decision was reached. This is one reason that acceptance of some AI tools is slow in application areas where interpretability is useful or indeed required.
Furthermore, as the application of AI expands, regulatory requirements could also drive the need for more explainable AI models.
Summary
Deep Learning Overview: Deep learning is the new state-of-the-art for artificial intelligence. Deep learning architecture is composed of an input layer, hidden layers, and an output layer. The word deep means there are more than two fully connected layers.
There is a vast number of neural network architectures, each designed to perform a given task. For instance, CNNs work very well with pictures, while RNNs provide impressive results with time series and text analysis.
Deep learning is now active in different fields, from finance to marketing and supply chain management. Big firms were the first to use deep learning because they already have a large pool of data. Deep learning requires an extensive training dataset.
According to industry estimates, only 21% of the available data is present in a structured form. Data is being generated as we speak, as we tweet, as we send messages on WhatsApp and in various other activities. The majority of this data exists in the textual form, which is highly unstructured in nature.
Despite having high dimension data, the information present in it is not directly accessible unless it is processed (read and understood) manually or analyzed by an automated system. In order to produce significant and actionable insights from text data, it is important to get acquainted with the basics of Natural Language Processing (NLP).
In this article, we will talk about the basics of different techniques related to Natural Language Processing.
Table of Contents
What are Corpus, Tokens, and N-grams?
What is Tokenization?
What is White-space Tokenization?
What is Regular Expression Tokenization?
What is Normalization?
What is Stemming?
What is Lemmatization?
Part of Speech tags in NLP
Grammar in NLP and its types
What is Constituency Grammar?
What is Dependency Grammar?
Let’s Begin!
What are Corpus, Tokens, and N-grams?
A Corpus is defined as a collection of text documents. For example, a data set containing news articles is a corpus, and a set of tweets containing Twitter data is a corpus. So a corpus consists of documents, documents comprise paragraphs, paragraphs comprise sentences, and sentences comprise smaller units called Tokens.
Tokens can be words, phrases, or N-grams, where an N-gram is defined as a group of n words appearing together.
For example, consider this given sentence-
“I love my phone.”
In this sentence, the uni-grams (n=1) are: I, love, my, phone
Bi-grams (n=2) are: I love, love my, my phone
And tri-grams (n=3) are: I love my, love my phone
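These n-grams can be produced with a few lines of Python (a generic sketch, not tied to any particular NLP library):

```python
def ngrams(tokens, n):
    """Return all groups of n consecutive words as strings."""
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = ["I", "love", "my", "phone"]
print(ngrams(tokens, 1))  # uni-grams
print(ngrams(tokens, 2))  # bi-grams
print(ngrams(tokens, 3))  # tri-grams
```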
So, uni-grams represent one word, bi-grams represent two words together, and tri-grams represent three words together.
2. What is Tokenization?
Let’s discuss Tokenization now. Tokenization is the process of splitting a text object into smaller units, also called tokens. Examples of tokens can be words, numbers, n-grams, or even symbols. The most commonly used tokenization process is White-space Tokenization.
2.1 What is White-space Tokenization?
Also known as unigram tokenization. In this process, the entire text is split into words by splitting on white spaces.
For example, in a sentence- “I went to New-York to play football.”
This will be split into the following tokens: “I”, “went”, “to”, “New-York”, “to”, “play”, “football.”
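In Python, white-space tokenization is exactly what the built-in str.split() does:

```python
sentence = "I went to New-York to play football."
tokens = sentence.split()  # split() breaks the text on runs of whitespace
print(tokens)
```

Note that the trailing period stays attached to "football.", since only whitespace is used as a delimiter.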
Notice that “New-York” is not split further because the tokenization process was based on white spaces only.
2.2 What is Regular Expression Tokenization?
The other type of tokenization process is Regular Expression Tokenization, in which a regular expression pattern is used to get the tokens. For example, consider the following string containing multiple delimiters such as commas, semicolons, and white space:
Sentence = "Football, Cricket; Golf Tennis"
re.split(r'[;,\s]+', Sentence)
Tokens= “Football”, ”Cricket”, “Golf”, “Tennis”
Using Regular expression, we can split the text by passing a splitting pattern.
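A runnable version using Python's re module (note the + in the pattern, which collapses runs of delimiters so ", " does not leave an empty token behind):

```python
import re

sentence = "Football, Cricket; Golf Tennis"
# [;,\s]+ matches one or more delimiters: comma, semicolon, or whitespace
tokens = re.split(r"[;,\s]+", sentence)
print(tokens)  # ['Football', 'Cricket', 'Golf', 'Tennis']
```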
Tokenization can be performed at the sentence level, at the word level, or even at the character level.
3. What is Normalization?
The next technique is Normalization. In the field of linguistics and NLP, a Morpheme is defined as the base form of a word. A token is generally made up of two components, Morphemes, which are the base form of the word, and Inflectional forms, which are essentially the suffixes and prefixes added to morphemes.
For example, consider the word antinationalist, which is made up of "anti" and "ist" as the inflectional forms and "national" as the morpheme. Normalization is the process of converting a token into its base form: the inflections are removed from a word so that the base form can be obtained. So, the normalized form of antinationalist is national.
Normalization is useful in reducing the number of unique tokens present in the text, removing the variations of a word in the text, and removing redundant information too. Popular methods which are used for normalization are Stemming and Lemmatization.
Let's discuss them in detail!

3.1 What is Stemming?
Stemming is an elementary rule-based process for removing inflectional forms from a token; the output is the stem of the word.
For example, “laughing”, “laughed“, “laughs”, “laugh” will all become “laugh”, which is their stem, because their inflection form will be removed.
Stemming is not a good normalization process because sometimes stemming can produce words that are not in the dictionary. For example, consider a sentence: “His teams are not winning”
After stemming the tokens that we will get are- “hi”, “team”, “are”, “not”, “winn”
Notice that the keyword “winn” is not a regular word and “hi” changed the context of the entire sentence.
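A deliberately naive suffix-stripping stemmer (written here purely for illustration; real stemmers such as Porter's use many more rules) reproduces the behaviour described above:

```python
def naive_stem(word):
    """Strip one common inflectional suffix; crude, rule-based stemming."""
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix):
            return word[: -len(suffix)]
    return word

print([naive_stem(w) for w in ["laughing", "laughed", "laughs", "laugh"]])
# ['laugh', 'laugh', 'laugh', 'laugh']

print([naive_stem(w) for w in "his teams are not winning".split()])
# ['hi', 'team', 'are', 'not', 'winn'] -- "winn" and "hi" are not valid words
```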
3.2 What is Lemmatization?
Lemmatization, on the other hand, is a systematic step-by-step process for removing inflection forms of a word. It makes use of vocabulary, word structure, part of speech tags, and grammar relations.
The output of lemmatization is the root word, called a lemma. For example, the lemma of "running" is "run".
Also, since it is a systematic process while performing lemmatization one can specify the part of the speech tag for the desired term and lemmatization will only be performed if the given word has the proper part of the speech tag. For example, if we try to lemmatize the word running as a verb, it will be converted to run. But if we try to lemmatize the same word running as a noun it won’t be converted.
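In practice this is done with a lexicon-backed lemmatizer such as NLTK's WordNetLemmatizer; below is a tiny dictionary-based stand-in (the lookup table is ours, purely for illustration) showing how the part-of-speech tag controls the result:

```python
# Minimal lemma lookup keyed on (word, pos); a real lemmatizer
# consults a full vocabulary such as WordNet.
LEMMAS = {
    ("running", "v"): "run",
    ("laughed", "v"): "laugh",
    ("feet", "n"): "foot",
}

def lemmatize(word, pos):
    """Return the lemma if (word, pos) is known, else the word unchanged."""
    return LEMMAS.get((word, pos), word)

print(lemmatize("running", "v"))  # 'run'      -- as a verb: converted
print(lemmatize("running", "n"))  # 'running'  -- as a noun: left as-is
```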
Let us now look at some of the syntax- and structure-related properties of text objects. We will be talking about part of speech tags and grammar.

4. Part of Speech (PoS) Tags in Natural Language Processing
Part of speech tags, or PoS tags, are properties of words that define their main context, their function, and their usage in a sentence. Some commonly used parts of speech are nouns, which define any object or entity; verbs, which define actions; and adjectives and adverbs, which act as modifiers, quantifiers, or intensifiers in a sentence. In a sentence, every word is associated with a part of speech tag. For example,
“David has purchased a new laptop from the Apple store.”
In the above sentence, every word is associated with a part of speech tag that defines its function.
In this case, "David" has the NNP tag, which means it is a proper noun; "has" and "purchased" are verbs, indicating actions; "laptop" and "Apple store" are nouns; and "new" is an adjective whose role is to modify the context of "laptop".
Part of speech tags are determined by the relations of a word with the other words in the sentence. Machine learning models or rule-based models are applied to obtain the part of speech tag of a word. The most commonly used part of speech tagging notation is provided by the Penn Treebank tagset.
Part of speech tags have a large number of applications and they are used in a variety of tasks such as text cleaning, feature engineering tasks, and word sense disambiguation. For example, consider these two sentences-
Sentence 1: “Please book my flight for NewYork”
Sentence 2: “I like to read a book on NewYork”
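A toy tagger (written here only to illustrate the idea; real taggers use statistical models trained on annotated corpora) can separate the two usages of "book" by looking at the preceding word:

```python
DETERMINERS = {"a", "an", "the", "my", "this"}

def tag_book(sentence):
    """Tag the word 'book' as NN or VB from its left context (toy rule)."""
    words = [w.lower() for w in sentence.split()]
    i = words.index("book")
    return "NN" if i > 0 and words[i - 1] in DETERMINERS else "VB"

print(tag_book("Please book my flight for NewYork"))  # 'VB'
print(tag_book("I like to read a book on NewYork"))   # 'NN'
```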
In both sentences, the keyword "book" is used, but in sentence one it is used as a verb, while in sentence two it is used as a noun.

5. Grammar in NLP and its types
Now, let's discuss grammar. Grammar refers to the rules for forming well-structured sentences. The first type of grammar is constituency grammar.

5.1 What is Constituency Grammar?
Any word, group of words, or phrase can be termed a constituent, and the goal of constituency grammar is to organize any sentence into its constituents using their properties. These properties are generally derived from the part of speech tags and from noun- or verb-phrase identification.
For example, constituency grammar can define that any sentence can be organized into three constituents- a subject, a context, and an object.
These constituents can take different values and accordingly can generate different sentences. For example, suppose we have a list of values for each constituent (subjects, contexts, and objects).
Some of the examples of the sentences that can be generated using these constituents are-
“The dogs are barking in the park.”
“They are eating happily.”
“The cats are running since morning.”
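Assuming constituent lists inferred from the three example sentences (hypothetical values, since the original constituent table is not reproduced here), the combinations can be generated mechanically:

```python
from itertools import product

subjects = ["The dogs", "They", "The cats"]
contexts = ["are barking", "are eating", "are running"]
objects_ = ["in the park", "happily", "since morning"]

# Every (subject, context, object) combination yields a sentence.
sentences = [f"{s} {c} {o}." for s, c, o in product(subjects, contexts, objects_)]

print(len(sentences))                                    # 27 combinations
print("The dogs are barking in the park." in sentences)  # True
```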
Another way to look at constituency grammar is to define the grammar in terms of part of speech tags. Say a grammar structure contains the groups [determiner, noun], [adjective, verb], and [preposition, determiner, noun], which corresponds to the same sentence: "The dogs are barking in the park."

5.2 What is Dependency Grammar?
A different type of grammar is dependency grammar, which states that the words of a sentence depend on other words of the sentence. For example, in the previous sentence, "barking" modifies "dogs", so an adjectival-modifier dependency exists between the two.
Dependency grammar organizes the words of a sentence according to their dependencies. One of the words in a sentence acts as a root and all the other words are directly or indirectly linked to the root using their dependencies. These dependencies represent relationships among the words in a sentence and dependency grammars are used to infer the structure and semantics dependencies between the words.
Let’s consider an example. Consider the sentence:
“Analytics Vidhya is the largest community of data scientists and provides the best resources for understanding data and analytics.”
The dependency tree of this sentence looks something like this-
In this tree, the root word is “community” having NN as the part of speech tag and every other word of this tree is connected to root, directly or indirectly, with a dependency relation such as a direct object, direct subject, modifiers, etc.
These relationships define the roles and functions of each word in the sentence and how multiple words are connected together. Every dependency can be represented as a triplet containing a governor, a relation, and a dependent, which means that a dependent is connected to the governor by the relation; loosely, these correspond to subject, verb, and object respectively. For example, in the same sentence: "Analytics Vidhya is the largest community of data scientists"
“Analytics Vidhya” is the subject and is playing the role of a governor, the verb here is “is” and is playing the role of the relation, and “the largest community of data scientist” is the dependent or the object.
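The (governor, relation, dependent) triplet can be represented directly in code, for instance with a named tuple (a minimal sketch):

```python
from collections import namedtuple

# A dependency triplet: dependent is connected to governor by relation.
Dependency = namedtuple("Dependency", ["governor", "relation", "dependent"])

triple = Dependency(
    governor="Analytics Vidhya",
    relation="is",
    dependent="the largest community of data scientists",
)

print(triple.relation)  # 'is'
```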
Dependency grammars can be used in different use cases:
Named Entity Recognition– they are used to solve named entity recognition problems.
Question Answering System– they can be used to understand relational and structural aspects of question-answering systems.
Coreference Resolution– they are also used in coreference resolutions in which the task is to map the pronouns to the respective noun phrases.
Text summarization and Text classification– they can be used for text summarization problems and as features for text classification problems.

End Notes
In this article, we looked into the basics of Natural Language Processing.
NLP’s role in the modern world is skyrocketing. With the volume of unstructured data being produced, it is only efficient to master this skill or at least understand it to a level so that you as a data scientist can make some sense of it.
If you are interested in a full-fledged Natural Language Processing course covering everything from the basics to advanced topics, check out Analytics Vidhya's Certified Natural Language Processing Master Program.
Introduction to Android Apps Development for Beginners
Building software and application programs for mobile phones and smart gadgets is increasingly popular in today's digital world. If you are running a company or business, it is important to know that Android app development is now essential to the success of any product or service.
Importance of Android Apps Development on Mobile
The scope of Android phones is growing by leaps and bounds every day, and your business today depends on how smartly you exploit Android app development. One important reason for Android app development on mobile is that it provides flexibility and ease of doing business with your clients. A quality mobile Android app development service can help you stand out in this competitive market. Android app development has played an important role in turning a phone into a smartphone.

Tips to Choose the Appropriate Android App Developers
In this age of the internet, you will not get a second chance, hence you will have to be very clear about the privacy or confidentiality of the phone applications that you want to be developed. You should hire android app developers to meet your needs, keeping the following tips in mind to ensure that you are hiring the right expert.
Choose developers that own and are familiar with a wide range of devices
Most of you will want to target common smartphones such as BlackBerry, Android, or iPhone devices for app development. Therefore you should choose mobile application developers who own and are familiar with these devices. It is very important to have an idea of what kind of devices you want your app to be compatible with before looking for app developers.

Go through the list of former clients and created apps of the particular app development company
An experienced app developer can give you a better experience and results. So it is necessary that you have a look at the apps created and the clients for whom it is created. Make sure that you ask for references and keep several questions in mind, when checking the apps.
Look for developers who design apps that fit a variety of mobile gadgets
When selecting mobile app developers, you should make sure that you choose one who is able to tailor your app in order to fit a variety of mobile gadgets like iPhones as well as Android.
Look for mobile app developers that offer extra services
A lot of the app developers also offer extra services like android apps security features and marketing services. Therefore, you should try to look for an app developer who can offer android apps development services beyond designing the apps.
The most important characteristic of an Android app development company is trustworthiness. Before selecting a particular mobile app development company, you should first ascertain that the company is genuine and abides by the rules of secrecy. Most mobile app developers have immense talent and design skills, and hiring a suitable mobile app developer can give you a perfect solution for your app requirements.

Contract Mobile App Developers for Effective Android Apps Development Solutions
People are familiar with the IT outsourcing business, and this process works efficiently for large corporate Android app development software companies, which outsource tasks such as development to other companies. The concept of outsourcing is not new, and today many tasks are being outsourced by large corporations.
Currently, the trend of outsourcing Android app development is ruling the market. This has proven very beneficial both for the companies taking on the work and for the businesses outsourcing it, because outsourcing the Android app development task saves both time and money.
A good amount of capital is required to establish a whole in-house team. By outsourcing this work to a good Android application development company, you can save money and invest it in other areas of your business.
The development of mobile applications for Android takes a long time. It also requires many skills and techniques. You can find all these things in one place: a good Android development outsourcing company, which has everything you are looking for.
Well, now that you know that outsourcing Android app development tasks can be very beneficial for you, the next thing you need to do is find a good company that stands firm on your criteria. You only have to surf the internet, where you can find a number of excellent outsourcing development companies with several years of experience in Android mobile application development. These companies also provide contract Android developers to make your task even easier.
To extend the functionality of your smartphone, signing on an Android app development services provider is required. There are many mobile application development companies on the Android app market, but you need to choose carefully to get the best high-end services.

Some companies set up a channel of dialogue between professionals and clients, which helps them better understand and meet user needs. To get high-quality services when outsourcing Android application development, be sure to choose the best outsourcing company.

Steps to Keep into Consideration while Hiring an Android Apps Development Team
There are various platforms on which mobile applications are developed, but Android is gaining great recognition and popularity in the market. Android is an open-source platform powered by a renowned operating system. Since its launch, it has delighted its users and is making more and more people rush for Android-powered smartphones.
If you have effective app ideas for strong business growth, turn those ideas into a mobile application, preferably on Android if you are new to this field. Android applications are attracting more and more phone users around the world, and one can find many opportunities in the Android development industry.
There is no shortage of Android app development companies all over the world, and companies located in different places have gained striking reputations for providing result-driven solutions to their clients. But among numerous companies, how will you find an ideal Android app development company that is appropriate for all your requirements?

These are some important steps to follow for finding an ideal team of professionals:
Get Initiated into a Thorough and Distinct Research
Look for the Portfolio and Clientele
You shouldn’t hand over the responsibility of giving your idea a prominent platform to someone without going through the complete details of what the professionals have done in the past and how they have handled the complex projects. This will give you a clear insight into the work procedure of your chosen android app development company.
Cross-Check Client’s Testimonial
When you search the available Android app development companies, you are likely to find client testimonials attached. After finding them, you should cross-check the clients' testimonials. This gives you beforehand information on how your experience will be after joining hands with such a team of professionals.
Thus, just exploring the sites and picking a professional Android app development team is not enough. You need to take some important steps to make your hiring a significant one.

Hiring a Sophisticated yet Inexpensive Android Apps Development Company
Mobile devices have taken the world by storm. Especially the business sector is heavily dependent on it. People are on the move and the world of technology is trying to keep pace with it by creating devices that suit the requirements of time and age. Today websites are made responsive so that these can be opened in all types of devices like smartphones, tablets, laptops, and many more. When it comes to mobile devices android based mobile devices are the most popular and widely used. For this particular reason, Android apps development applications are most in demand.
There are many experienced and knowledgeable app developers in major cities who are highly capable of handling complex projects.
These developers not only have technical knowledge but are also well-informed about the Android application market, which helps them develop market-friendly apps.
Before employing an Android app development company, their technological skills, experience, and credibility should be evaluated. Certain things that need to be considered are as follows:
The android app development company must have the ability to ship the final product within the stipulated time. Missing deadlines is just not acceptable.
A portfolio of the app development company’s past work is very important. This can provide potential clients with the opportunity to have a look at the brands they have worked with and the kind of problems they have solved.
Ensure that the company employs experienced developers to work on the project and do not outsource it to small players.
Go through all the available client testimonials and this will offer you a good idea about the level of client satisfaction.
If you are a novice in the field of app development then try to hire an android application Development Company through reference and do not beat around the bush.
Realistically, finalizing an Android app development company for hire can be challenging, but the above points can be very helpful.

Recommended Articles
This has been a basic guide to Android app development for beginners. Here we discussed the basic concept, its importance, and some important steps to follow for finding an ideal team of professionals. You may look at the following articles to learn more –
This article was published as a part of the Data Science Blogathon.

Introduction
When we run and deploy an application, a machine learning model, a database, or some third-party package, it needs certain libraries and dependencies. Working on multiple applications that require different versions of those dependencies can lead to a dependency conflict.
This problem can be solved either by using separate machines or by packaging all the dependencies with each application.

What is Docker?
Docker is a containerization and management tool. Its motto is "Develop, Ship, and Run anywhere", no matter what operating system or environment we are using. A Docker container is a kind of isolated space where an application runs using system resources.
Note: In this article, I have used the word Capsule for the word Container. Containers are often called capsules.

Features of Docker
Docker is a full package solution for an app migration. It comes with the following features:
Containers are light and flexible compared to running an application on virtual machines, and they don't depend on the version of the operating system.
It can be easily deployed to the cloud server or local servers with ease.
It is easily scalable and supports scaling with ease.

Docker Components

Docker is made up of several components; these components are responsible for their own tasks, from building and running images to creating capsules.
Docker Engine ( Used for building and creating docker containers).
Docker Hub ( This is a place where you can host your docker images).
Docker Compose ( It defines and runs multi-container applications )

Docker Architecture

Docker's architecture consists of several units, each responsible for its own pre-defined tasks.
Docker CLI ( Command Line Interface)
REST API ( connects the CLI to the docker daemon )
Docker Daemon ( Responsible for Objects, Images, Containers, Engines, etc.)
Following the Open Container Initiative (OCI) standardization, Docker was redesigned and must support the OCI Runtime Specification and Image Specification.
Runtime Specification defines the lifecycle of the capsule technology.
Earlier Docker Daemon was responsible for all the processes in docker but after the Standardization runc (Container Runtime) is responsible for spinning the Image. In Container orchestration tools like Kubernetes, we only need to install a container runtime to spin a container on its pod.
runc together with shim makes Docker daemon-less: runc spins up the capsule, and shim monitors the running capsules.

Docker Setup
The complexity of docker installation depends on the operating system. Use this link for installing docker in your system.
For this article, I’ll be using docker on my Linux machine.
After installing, verify that docker is installed by typing:

$ docker --version

Managing Docker as a Service

Get the status of the Docker engine, i.e. whether it's up or not:

$ systemctl status docker

As you can see, our docker engine is up and active. It can easily be stopped and started with:

# stops the docker engine
$ systemctl stop docker

# starts the docker engine
$ systemctl start docker

Start docker in debug mode if the docker service is facing errors:

$ sudo dockerd --debug

Docker CLI & Deploying Your First Container
A docker image contains all the dependencies, source files, and tools needed to run an application. A Docker image is a source file that creates a capsule.
A capsule is a running instance created by a docker image.Docker Hello-world
Docker hello-world is an image file that creates a docker capsule which prints "hello world":

docker run hello-world

It will first check for the hello-world image file in its local registry. If not found, it pulls the image file from Docker Hub (the default public registry) and runs the capsule.

Printing All the Containers
List only the currently running capsules:

docker ps

List all the docker capsules:

docker ps -a
The part highlighted in yellow is the Capsule ID/Container ID; whenever we want to select a specific capsule, we use its Container ID.
Note: We don't need to write the whole container ID; if the prefix is unique, we can provide just the first few characters.

docker rm 70

This command will remove the capsule whose container ID starts with 70.

List Down All the Images

This lists all the docker images available in our local registry:

docker images

Deleting an Image

If you want to delete an image from our local registry, we need to execute one of the following commands:

docker image rm ID_OF_IMAGE
docker image rm Image_Name

Pulling an Image from Docker Hub
Any Image file available on the docker hub or from other sources can be pulled in the local docker registry using the pull command.
The command docker pull only downloads the image file; it won't create any capsule from the image unless we ask docker to do so.

Docker Inspect
The Docker inspect is a powerful command that lets us examine a container’s information, exposed ports, and network properties.
The inspect command can be used to inspect an image file or a capsule:

# Inspecting a docker image
docker image inspect IMAGE_ID

# Inspecting a docker container
docker container inspect ID_OF_CONTAINER
docker inspect ID_OF_CONTAINER

Running Ubuntu in Docker

To run Linux as a docker capsule, we first need to pull the ubuntu image file:

docker pull ubuntu
docker images

Now we have an ubuntu image file in our local registry. It's time to spin up the capsule using the image file:

docker run -it ubuntu
Here -it signifies that we want our capsule to run in interactive mode: it will wait for our input.
We can run the Linux commands now to check if our Linux capsule is up or not.
The capsule starts the "/bin/bash" shell, which gives us a terminal inside the running ubuntu capsule.

docker exec runs a new command in an already running capsule.

Stopping the Running Capsules

We need its ID to stop a running capsule:

docker stop ID_OF_CONTAINER

Conclusion
In this article, we learned some basic docker commands for managing docker services and ran some Linux-based docker commands.
We discussed commands for managing capsules, docker images, and the docker engine, and for pulling images from the hub to the local registry.
We don't need to install full Docker to run a capsule; a container runtime is enough to run a deployment.
Docker images can be created from the project files.
Linux natively supports docker.
Docker became daemon-less since a capsule no longer needs the docker daemon to run. Docker provides storage layers where capsules keep their files, and even if a capsule crashes, the stored files won't be deleted.
Feel Free to Connect with me on LinkedIn.
The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.
Welcome to our Google Optimize tutorial!
Today, we’ll be discussing how to A/B test a website with the free experimentation tool from Google.
This is a practical read but not a short one. If you prefer to jump into action then I strongly recommend at least learning what we detailed in when you should optimize.
What we’ll cover in more detail:
Let's get rolling.

What Is Google Optimize?
Google Optimize is a free tool from Google that allows you to improve the experiences of users on your website. The platform allows you to present to your audience different versions of specific pages of your website and test the most effective one.
Google Optimize uses three different types of tests: A/B Testing, Multivariate Testing and Redirect tests.
For enterprise-level needs, there is a premium version called Google Optimize 360.

What Is A/B Testing in Google Optimize?
A/B testing is when visitors are shown two (or more) versions of a page to identify which one brings in more of the results you are looking for.
The main page is called the original (or control) and the modified version of that page is the variant. When you A/B test, you show both pages to different groups of people (randomly chosen) to see which one performs better.
The purpose of this test is to solve a user pain point or a problem such as a decrease in traffic or revenue. In general, the changes tested are not very big; they are typically limited to individual page elements, such as a button color or a headline.

Why Should I Use Google Optimize?
We’ll cover the three most common reasons to use Google Optimize – integration, cost, and ease of usage.
First, Optimize is well-integrated. That’s because it’s one of the solutions of the Google Marketing Platform, which provides a suite of tools like Google Analytics for businesses all in one spot.
This means that Google Optimize can be linked to Google Analytics, Google Tag Manager, and Google Ads. It removes the need to switch between scattered platforms to get the marketing job done.
Follow these links to link your Google Analytics and Google Ads accounts to Google Optimize:
Second, it is free to use. Although the upgraded version (Google Optimize 360) provides additional capabilities, the free version is more than sufficient for most companies.
When GA4 replaced Universal Analytics, the change came with integrations, customizations, and additional features which used to be only available to Optimize 360 users.
Although Optimize 360 and other platforms may be more suitable for large corporations, the free version still competes well above average compared to many other paid testing solutions.
Third, it is easy to use. You don't need to know how to code and therefore don't need to rely on a developer for your tweaks.

When Should I Use Google Optimize?
There are plenty of helpful tutorials online to get you up and running, as you don’t need much to create your first experience.
What this leaves out, though, are other prerequisites that go beyond installing the right Optimize snippet to your website. You’ll quickly realize that not everyone is ready for split testing.
The main concern relates to low-traffic websites, meaning any site with fewer than 4,000 visitors per month.
However, we'll show you a workaround if your traffic is low.

Marketing Forecasts
What are your funnels and how close are you to your numbers? Can you confidently tell the conversion rates for each of your funnel steps for the next 1-3 months?
These questions are relevant because measuring behaviors indicates quantitatively where problems and opportunities lie. Not only that, but they also allow you to guide or come up with a hypothesis for your experiments.
Speaking of funnels, they don’t have to be complicated.
A simple 3 step funnel including micro goals and macro goals can go a long way for businesses. Such a funnel can be easily set up via Google Analytics goals or by enabling conversions in GA4.
A concrete example could be turning these steps into GA goals:
After weeks of knowing what conversions you hit on average, you’ll be able to quickly identify bottlenecks and optimization opportunities.
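A 3-step funnel like the one described can be summarized in a few lines of Python (the visit/sign-up/purchase counts below are made-up illustration data):

```python
# Hypothetical monthly funnel counts: (step name, number of users).
funnel = [("visit", 10_000), ("signup", 800), ("purchase", 120)]

# Conversion rate of each funnel step relative to the previous one.
rates = {
    f"{prev} -> {step}": n / prev_n
    for (prev, prev_n), (step, n) in zip(funnel, funnel[1:])
}

for name, rate in rates.items():
    print(f"{name}: {rate:.1%}")
# visit -> signup: 8.0%
# signup -> purchase: 15.0%
```

Tracking these step-to-step rates over several weeks is what lets you spot a bottleneck the moment one rate drifts below its usual baseline.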
You should master predicting your conversion rates before jumping into split testing. You can optimize your websites for years this way without A/B testing.
Why am I mentioning forecasting?
💡 Top Tip: Low-traffic websites can still leverage A/B testing by testing macro conversions. Figure out the funnel steps that you will enable as goals or conversions in Google Analytics.

Number of Traffic and Conversions
Many people often ask how much traffic is needed to run A/B testing.
Marketers do not have a conclusive answer. However, there are ranges and other factors to be taken into account for your test to be reliable or statistically significant.
You will need a minimum of roughly 10,000 monthly visits to your site. Additionally, you'll need a minimum of 100 to 500 conversions per month.
Still, there’s more to consider. That’s why calculators like this one solve these issues.
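The arithmetic behind such sample-size calculators can be sketched with the standard normal-approximation formula for comparing two proportions (95% confidence and 80% power; the function name and defaults are ours):

```python
import math

def sample_size_per_variant(baseline_rate, minimum_detectable_effect,
                            z_alpha=1.96, z_beta=0.8416):
    """Approximate visitors needed per variant for a two-sided test
    at 95% confidence and 80% power (normal approximation)."""
    p1 = baseline_rate
    p2 = baseline_rate + minimum_detectable_effect  # absolute lift
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = ((z_alpha + z_beta) ** 2) * variance / minimum_detectable_effect ** 2
    return math.ceil(n)

# Detecting a lift from a 5% to a 6% conversion rate:
print(sample_size_per_variant(0.05, 0.01))  # 8156 visitors per variant
```

Note how the required sample size explodes as the effect you want to detect shrinks, which is exactly why low-traffic sites struggle with element-level tests.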
But again, the pro tip we shared with you about tracking your funnel steps in the previous section can help you obtain results if your site has low traffic.
We'll also provide you with detailed solutions in the analysis section at the end of this post.

How to Install Google Optimize?
To start, we’ll create a Google Optimize account and container that will be linked to Google Analytics. Afterward, we will install the Optimize code snippet either manually or through Google Tag Manager.
We’ll show you both and explain which method to use for your situation.
Choose your account settings, and remember to always get permission from your client or company before checking the Benchmarking box.
You will then select your preferences for emails sent to you related to Google Optimize.
Your account and container are now created. You’ll land on a page inviting you to create your first experience.
A slide-in popup will appear with your container settings details. Here you can find four items:
Your Google Optimize container name and ID
The linking feature to your Google Analytics property
The Optimize code snippet to install on your website
The Chrome extension required to use the visual editor (more on this later)
First, name your container.
Simply select the edit button and give it a meaningful name.
Containers work the same as those in Google Tag Manager. Therefore, if you have multiple websites, you can create a new container for each.
A popup will slide on your screen. There you can choose a property to link to Optimize.
Google Optimize provides different options and limits for each version. You can take a look at it here.
Great! You've connected your Google Analytics property. You'll now automatically be brought back to the Container settings in the first popup to continue the setup.
Now it’s time to install Optimize to your website. We’re now in the Setup instructions section where you’ll find your Google Optimize snippet.
There are two ways to install Optimize:
hardcode one of the Optimize snippets in your website
use Google Tag Manager.
Note that the first method is considered best practice. However, there may be times when this method isn’t possible. This infographic explains which method you should use in each situation.
You may wonder whether to pick the synchronous or the asynchronous snippet. To simplify things, 90% of the time you'll use the synchronous snippet (optimize.js).
In addition, this snippet is recommended for most users in the Optimize Resource Hub. This is the same snippet that you’ll find by default in your Container settings under Setup instructions.
If you work with a client, you must read the documentation to see if the other snippet (asynchronous) may be a better option.
Regardless of which code snippet you choose, the location where you’ll hardcode it in the source code of your website matters.
Some website builders like Squarespace allow you to place the code directly in the head via a code injection feature.
In the Container settings, the last piece of the Setup instructions is Install the Chrome extension. You need this extension, along with the Chrome browser, to create experiments in the visual editor.

Snippet Placement Exceptions
🚨 Note: There are exceptions when it comes to the snippet placement. For example, if you have a Data Layer script on your site, then the Optimize snippet must come after the Data Layer script.
Any of the following must be positioned before the Optimize snippet:
The Data Layer
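For illustration, here is what that ordering might look like in a page's head. The OPT-XXXXXXX container ID is a placeholder; replace it with your own:

```html
<head>
  <meta charset="utf-8">
  <!-- 1. Data Layer comes first, so Optimize can read its values -->
  <script>
    window.dataLayer = window.dataLayer || [];
  </script>
  <!-- 2. Optimize snippet (OPT-XXXXXXX is a placeholder container ID) -->
  <script src="https://www.googleoptimize.com/optimize.js?id=OPT-XXXXXXX"></script>
  <!-- 3. Analytics / Google Tag Manager snippet loads after Optimize -->
</head>
```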
Not only is this bad for user experience, but it can also cost you money, since users may have to wait for the page to load or be presented with a page where conversions fail.
If you decide to use this snippet, copy the code here, and don't forget to replace the container ID with yours. This also applies if you deploy it with Google Tag Manager.
Let's visually review the best order for script/snippet placement.

How to Install Google Optimize With Google Tag Manager
Let’s go over the steps to install Optimize with Google Tag Manager.
Back in our Container settings, copy the Optimize ID.
In Google Tag Manager, go to Add a new tag.
Now follow these steps: Tag Configuration → Choose tag type → Google Optimize
Paste your Optimize container ID in the space under Optimize Container ID, then in the Triggering section below, choose the All Pages trigger.
Save your tag and let’s test it using the Preview Mode. If you’re not familiar with setting up tags and triggers, then take the time to read our Google Tag Manager tutorial for beginners.
We can see that our Google Optimize tag has fired.

Creating an Experiment in Google Optimize
Now, let’s create our first experiment. We’ll create an A/B test.
Close the Container settings page.
You will be prompted to name your experience. We recommend that you use a name related to what you intend to test.
For example, if you’re going to change the header of an XYZ page, your experience’s name could be “header-xyz”.
Add in the page you are going to use for your experiment. Simply paste the page URL in the space under What is the URL of the page you’d like to use?
At this point, you’ll be directed to the page of your experiment. There, you’ll find all the details and additional settings specific to your test.
This variant is the new version of the original page we’ve entered before, except that it’ll have the modifications we want to test. This is the ‘B’ of the A/B test.
In our case, we want to replace the text of the search button with an offer. The search button currently says "Search," and we'll replace that with "Get a 20% discount."
This is just for demonstration purposes. If you’ve followed our instructions on forecasting micro and macro conversions, then you’ll have a better idea of what to tweak.
Let's go back to our experiments page to look at the other settings.

Targeting and Variants
We’ll focus on two features in targeting and variants: weight and edit.
You probably noticed the default weight proportion of 50% assigned to each variant. But what does it mean? Weight is the share of traffic that you decide will go to each variant.
The default of 50% means visitors have an equal chance of seeing either page.
🚨 Note: Weight proportions can impact your sales and other marketing efforts, including your affiliates, since traffic may be sent to a page whose modifications turn away visitors. Consider the impact of one variant receiving more traffic over time.
If you’re not sure how to distribute your weight, you can use this rule of thumb: 75% for the original and 25% for the variant.
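Mechanically, weighted splitting works like a weighted coin flip. This sketch is only an illustration of the concept; Optimize assigns visitors to variants for you:

```javascript
// Illustrative sketch of weighted traffic splitting between variants.
// rng is injectable so the behavior can be tested; defaults to Math.random.
function pickVariant(variants, rng = Math.random) {
  const total = variants.reduce((sum, v) => sum + v.weight, 0);
  let r = rng() * total; // a point on the 0..total number line
  for (const v of variants) {
    if (r < v.weight) return v.name;
    r -= v.weight;
  }
  return variants[variants.length - 1].name; // floating-point edge-case guard
}

// The 75/25 rule of thumb from above:
const split = [
  { name: 'original', weight: 75 },
  { name: 'variant', weight: 25 },
];
```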
We can have some fun now as we edit the visual of the page of our variant.
Here is where the Chrome extension comes in handy.
You will land on your original page, but you’ll notice a bunch of HTML references and an Edit palette.
It’s here that we’ll modify our variant. Remember, we want to modify the search button by replacing its text with a discount offer.
A dropdown list of options will show up. Because we’re only interested in changing the button’s text, we will select Edit text.
You can select the options that suit your needs and skill level.
You can modify whatever you want. For example, you could change the color and size of the button as well, by scrolling down the Edit element palette in the RGB field.
Now that we've changed the text, here's what our button looks like:
Back to our experiment’s page. Let’s continue our settings walkthrough.
Our button needs to be available on our Demoshop website. So, wherever visitors navigate, this button must appear.
We can use the following configuration to make sure our changes remain available on all pages. Let’s use the match type: URL and Contains.
🚨 Note: If your changes must remain only on individual pages, then don’t change the settings.
In Targeting and Variants, the last setting is Audience targeting. We will not use it for this tutorial, but you should definitely have a look.
Audience targeting allows you to show your variants to different groups of users. These can be users coming from different campaigns (i.e., UTM parameters), devices, geography, etc.

Measurement and Objectives
Lastly, we’re going to optimize for our objectives.
Objectives are metrics you want to improve. They are essential to assess the performance of your variants and determine which one is the winner. They equate to goals/conversions in Google Analytics.
For a lead generation site, an objective might be form submissions; for an eCommerce site, revenue.
This is the reasoning behind having goals or conversions enabled in Google Analytics. Don’t worry if you don’t have any in GA, since Optimize makes it possible to configure them within its platform.
There are 3 types of objectives proposed by Optimize. These are system objectives, Analytics goals, and custom objectives.
System objectives are common goals found across industries such as PageViews, revenue, AdSense revenue, and more.
Google Analytics goals are those you configure in Google Analytics. You will find them in the list of goals in Optimize.
Custom objectives are those you can configure within Google Optimize. They are useful if you don’t have them set up in GA.
In our case, we'll select Choose from list. In the following area you can select your objective from Optimize's system objectives as well as your Google Analytics goals. We chose Pageviews.

Description and Checking of Your Installation
We strongly recommend that you add a description of what your test is about. This is best practice, especially if you run multiple experiments or work for different clients.

Starting Your Test
Lastly, you can’t launch your test without verifying your installation. On the same page, go to Settings.
Congratulations! Your test is now running.

How Do I Analyze Google Optimize Results?
Three main factors support the analysis of your results: time, conversions, and the p-value.
First, let’s locate all three of them before we look at how they work together.
These can be found in the Reporting section of your page at the top left corner.
Time-related results appear as a message at the top left corner, right under Reporting. Here the message says 'Optimize experiments need to run for at least two weeks to find a leader', which reflects Google Optimize's two-week recommendation.
Your conversions are displayed under the second Experiment column. Since we selected Pageviews in our objectives, we’ll look at the conversion results under ‘Experiments Pageviews’.
🚨 Note: The numbers 1046 and 1640 are not the number of PageViews for each variant. They are the number of conversions.
Lastly, to make sure your results didn’t occur by chance, you need the p-value. You can find it under the Probability to be Best.
We know where our analysis factors are; now we can learn how they come together to identify a winner.

Time
💡 Top Tip: Allow your experiments to run for at least 7 days.
Experiments have time considerations. A test that runs for too long will cost resources and take up time that the company could use to optimize areas for more immediate results.
On the other hand, if the test is too short, your results won’t be reliable.
A good rule is for your tests to run for more than 7 days, never less. Google Optimize itself recommends a minimum of 2 weeks; anything between 1 and 2 weeks isn't optimal, so proceed carefully. More than 3 weeks is definitely reliable.
The point is to avoid relying on results that are less than 1 week, no matter how good the conversions or p-values are.
Time also includes audience behaviors.
If, for example, your test was running during a holiday, then you should run the test for one additional week.

P-value
The p-value tells you if the results of the test happened by chance or from your modifications.
Aim for 95% (meaning there's only a 5% probability that the variant won by chance). Lower values such as 90% or slightly less can work, but this depends on the level of risk you're willing to take.
Also, if your p-values are close to each other, Google Optimize is telling you there isn't much difference between your original and your variant. In that case, that promising headline or new button color isn't going to have much impact, and it's up to you which version you prefer.
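To build intuition for "happened by chance," here is a classic two-proportion z-test. Note that this is a simplified frequentist analogue for illustration only; Google Optimize actually computes its Probability to be Best figure with Bayesian methods:

```javascript
// Two-proportion z-test: did variant B really beat variant A, or was it chance?
// Illustrative frequentist analogue -- Optimize itself uses Bayesian inference.
function twoProportionZ(convA, visitsA, convB, visitsB) {
  const pA = convA / visitsA;
  const pB = convB / visitsB;
  const pooled = (convA + convB) / (visitsA + visitsB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / visitsA + 1 / visitsB));
  return (pB - pA) / se;
}

// Standard normal CDF via the Abramowitz & Stegun 7.1.26 erf approximation.
function normCdf(z) {
  const x = Math.abs(z) / Math.SQRT2;
  const t = 1 / (1 + 0.3275911 * x);
  const erf = 1 - (((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t
    - 0.284496736) * t + 0.254829592) * t) * Math.exp(-x * x);
  return z >= 0 ? 0.5 * (1 + erf) : 0.5 * (1 - erf);
}

// Example: 100/1000 conversions on A vs 150/1000 on B.
const z = twoProportionZ(100, 1000, 150, 1000);
const pValue = 1 - normCdf(z); // a small p-value means chance is unlikely
```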
There is a visual representation of the p-values on the right side, under Modelled Pageviews per Session: the further apart the boxplots are, rather than overlapping, the clearer the difference between your variants.

How Do They All Come Together?
You can safely proceed with your changes when the success requirements we discussed above are met for each factor.
To illustrate, a test is considered reliable after running for a little more than 2 weeks, with more than 100 conversions and a p-value of 95%.
However, if one of the factors is below the success requirements, the test would be deemed unreliable.
Using the previous example, the experiment would not be reliable if any of these conditions occurred individually:
fewer than 100 conversions
a test shorter than 1 week
a p-value below 85%
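The checklist above can be sketched as a simple gate. The thresholds here are this article's rules of thumb, not official Google Optimize limits:

```javascript
// Reliability gate for an experiment, using this article's rules of thumb:
// at least 100 conversions, at least 7 days of runtime, and a
// Probability to be Best of 95% or higher.
function isTestReliable({ conversions, days, probabilityToBeBest }) {
  return conversions >= 100 && days >= 7 && probabilityToBeBest >= 0.95;
}
```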
If any one of these occurs, you need to wait a little longer before making any decisions.

How to Use Google Analytics and Optimize
Google Optimize declares which variant is the winner, whereas Google Analytics provides insights about that winner.
Remember that optimization is an ongoing process. Today’s winning variant can also negatively impact the subsequent steps of your funnel over time.
This impact can be monitored by looking at your goal/conversion funnels in Google Analytics.
Here’s how this works. The winning variant of a lead magnet landing page helped increase conversions for subscriptions.
Later on, if the subscription goal's conversion rate declines, you'll have to rework that lead magnet landing page.

FAQ

Why should I use Google Optimize?
There are three main reasons to use Google Optimize:

When should I use Google Optimize?
Google Optimize is suitable for websites with a minimum of 4,000 monthly visitors. It is recommended to have a good understanding of your website's conversion rates and forecasting capabilities before implementing A/B testing. Low-traffic websites can still benefit from A/B testing by focusing on testing macro conversions and using forecasting to predict results.

How do I analyze Google Optimize results?
To analyze the results of a Google Optimize experiment, you can use the reporting and analysis features within Google Optimize itself. It provides data on metrics such as conversion rates, engagement, and goal completions for each variant. You can also integrate Google Optimize with Google Analytics to gain further insights and analyze the impact of experiments on user behavior.

Summary
By now, navigating through Google Optimize should not be a mystery to you.
We’ve covered everything you need to know about A/B testing. We’ve also equipped you with the tools, frameworks, and strategies to set up your tests and analyze results like a pro.
Learn how to monetize your analytics skills with our handy guide on how to make money selling analytics services.