CoquiTTS: A Python Library For Text to Speech


CoquiTTS is a Python text-to-speech synthesis library. It uses cutting-edge models to transform any text into natural-sounding speech. CoquiTTS can be used to create audio content, improve accessibility, and add voice interactivity to your applications. In this article, you will learn how to install and use CoquiTTS in Python.

The Coqui AI team created CoquiTTS, an open-source Python text-to-speech library. The software is designed to meet the specific needs of low-resource languages, making it an effective tool for language preservation and revitalization efforts around the world.

CoquiTTS: A Powerful Python Text-to-Speech Synthesis Tool

CoquiTTS is also faster than many other speech synthesis tools. It can generate speech in real time, making it suitable for voice assistants, text-to-speech systems, and interactive voice response (IVR) systems. This performance is achieved using neural vocoding, a technique that keeps the neural network used for waveform generation compact, resulting in faster and more efficient processing.

Empowering Low-Resource Languages with CoquiTTS

Speech synthesis technology has the potential to be useful for a wide range of applications, but it is especially important for low-resource languages. Due to globalization, urbanization, and the dominance of more widely spoken languages, these languages frequently face challenges in conserving and maintaining their linguistic heritage.

CoquiTTS provides an effective way to address these issues by supporting language preservation and revitalization activities for low-resource languages. CoquiTTS can be used to develop speech synthesizers for such languages, allowing speakers to access information and communicate with others more easily. It can also be used to build speech interfaces for mobile devices, smart speakers, and home appliances, making technology more accessible to low-resource language speakers.

CoquiTTS has been successfully applied to a number of languages. For Kinyarwanda, a Bantu language spoken in Rwanda and neighboring countries that has struggled to preserve its linguistic heritage, a speech synthesizer was developed using CoquiTTS. The Kinyarwanda Speech Synthesis Project gathered Kinyarwanda speech samples, trained the neural network used by CoquiTTS, and built a high-quality speech synthesizer. This synthesizer has the potential to help Kinyarwanda speakers in a range of applications.

To use CoquiTTS in Python, you can follow these steps:

Install CoquiTTS using pip:

pip install coqui-tts

If you plan to code or train models, clone TTS and install it locally.
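The clone step itself is not shown in the article; it is presumably the standard checkout of the Coqui TTS repository:

$ git clone https://github.com/coqui-ai/TTS
$ cd TTS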

pip install -e .[all,dev,notebooks] # Select the relevant extras

If you are on Ubuntu (Debian), you can also run the following commands for installation.

$ make system-deps # intended to be used on Ubuntu (Debian). Let us know if you have a different OS.
$ make install

Docker Image

You can also try TTS without installing it by using the Docker image. Simply run the following commands.
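The command that starts a shell inside the published CPU image is missing from the article; based on the Coqui TTS documentation it looks roughly like this (the image name and port come from that documentation, not from this article):

$ docker run --rm -it -p 5002:5002 --entrypoint /bin/bash ghcr.io/coqui-ai/tts-cpu

The two python3 commands below are then run inside that container.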

python3 TTS/server/server.py --list_models # To get the list of available models
python3 TTS/server/server.py --model_name tts_models/en/vctk/vits # To start a server

Synthesizing speech by TTS

from TTS.api import TTS

# Running a multi-speaker and multi-lingual model

# List available 🐸TTS models and choose the first one
model_name = TTS.list_models()[0]
# Init TTS
tts = TTS(model_name)

# Run TTS
# ❗ Since this model is multi-speaker and multi-lingual, we must set the target speaker and the language
# Text to speech with a numpy output
wav = tts.tts("This is a test! This is also a test!!", speaker=tts.speakers[0], language=tts.languages[0])
# Text to speech to a file
tts.tts_to_file(text="Hello world!", speaker=tts.speakers[0], language=tts.languages[0], file_path="output.wav")

# Running a single speaker model

# Init TTS with the target model name
tts = TTS(model_name="tts_models/de/thorsten/tacotron2-DDC", progress_bar=False, gpu=False)
# Run TTS
tts.tts_to_file(text="Ich bin eine Testnachricht.", file_path=OUTPUT_PATH)

# Example voice cloning with YourTTS in English, French and Portuguese:
tts = TTS(model_name="tts_models/multilingual/multi-dataset/your_tts", progress_bar=False, gpu=True)
tts.tts_to_file("This is voice cloning.", speaker_wav="my/cloning/audio.wav", language="en", file_path="output.wav")
tts.tts_to_file("C'est le clonage de la voix.", speaker_wav="my/cloning/audio.wav", language="fr-fr", file_path="output.wav")
tts.tts_to_file("Isso é clonagem de voz.", speaker_wav="my/cloning/audio.wav", language="pt-br", file_path="output.wav")

# Example voice conversion, converting the speaker of the `source_wav` to the speaker of the `target_wav`
tts = TTS(model_name="voice_conversion_models/multilingual/vctk/freevc24", progress_bar=False, gpu=True)
tts.voice_conversion_to_file(source_wav="my/source.wav", target_wav="my/target.wav", file_path="output.wav")

# Example voice cloning with a single-speaker TTS model combined with the voice conversion model.
# This way, you can clone voices by using any model in 🐸TTS.
tts = TTS("tts_models/de/thorsten/tacotron2-DDC")
tts.tts_with_vc_to_file(
    "Wie sage ich auf Italienisch, dass ich dich liebe?",
    speaker_wav="target/speaker.wav",
    file_path="output.wav"
)

# You should set the `COQUI_STUDIO_TOKEN` environment variable to use the API token.
# If you have a valid API token set, you will see the studio speakers as separate models in the list.
models = TTS().list_models()
# Init TTS with the target studio speaker
tts = TTS(model_name="coqui_studio/en/Torcull Diarmuid/coqui_studio", progress_bar=False, gpu=False)
# Run TTS
tts.tts_to_file(text="This is a test.", file_path=OUTPUT_PATH)
# Run TTS with emotion and speed control
tts.tts_to_file(text="This is a test.", file_path=OUTPUT_PATH, emotion="Happy", speed=1.5)

Command line tts

Single Speaker Models

List provided models:

$ tts --list_models

Get model info (for both tts_models and vocoder_models):

Query by type/name: The model_info_by_name option uses the name as it appears in the output of --list_models.

For example:

$ tts --model_info_by_name tts_models/tr/common-voice/glow-tts
$ tts --model_info_by_name vocoder_models/en/ljspeech/hifigan_v2

Query by type/idx: The model_info_by_idx option uses the corresponding idx from --list_models.

For example:

$ tts --model_info_by_idx tts_models/3

Run TTS with default models:

$ tts --text "Text for TTS" --out_path output/path/speech.wav

Run a TTS model with its default vocoder model:

For example:

$ tts --text "Text for TTS" --model_name "tts_models/en/ljspeech/glow-tts" --out_path output/path/speech.wav

Run with specific TTS and vocoder models from the list:

For example:

$ tts --text "Text for TTS" --model_name "tts_models/en/ljspeech/glow-tts" --vocoder_name "vocoder_models/en/ljspeech/univnet" --out_path output/path/speech.wav

Run your own TTS model (Using Griffin-Lim Vocoder):

$ tts --text "Text for TTS" --model_path path/to/model.pth --config_path path/to/config.json --out_path output/path/speech.wav

Run your own TTS and Vocoder models:

$ tts --text "Text for TTS" --model_path path/to/model.pth --config_path path/to/config.json --out_path output/path/speech.wav --vocoder_path path/to/vocoder.pth --vocoder_config_path path/to/vocoder_config.json Multi-speaker Models

Run the multi-speaker TTS model with the target speaker ID:
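The command itself is not reproduced in the article; based on the flags the tts CLI documents for multi-speaker models, it looks approximately like this (the model name and the --speaker_idx flag are assumptions here):

$ tts --text "Text for TTS" --model_name "tts_models/en/vctk/vits" --speaker_idx <speaker_id> --out_path output/path/speech.wav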

Run your own multi-speaker TTS model:
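Again the command is missing from the article; a hedged sketch based on the documented tts CLI flags (--speakers_file_path and --speaker_idx are assumptions here):

$ tts --text "Text for TTS" --model_path path/to/model.pth --config_path path/to/config.json --speakers_file_path path/to/speaker.json --speaker_idx <speaker_id> --out_path output/path/speech.wav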



Steps For Effective Text Data Cleaning (With Case Study Using Python)

Introduction

One of the first steps in working with text data is to pre-process it. It is an essential step before the data is ready for analysis. The majority of available text data is highly unstructured and noisy in nature; to achieve better insights or to build better algorithms, it is necessary to work with clean data. For example, social media data is highly unstructured: it is informal communication, with typos, bad grammar, slang, and unwanted content like URLs, stop-words, and expressions as the usual suspects.

In this article, I therefore discuss these possible noise elements and how to clean them step by step, with examples in Python.

As a typical business problem, assume you are interested in finding which features of an iPhone are most popular among fans. You have extracted consumer opinions related to the iPhone, and here is a tweet you extracted:


Steps for data cleaning:


Here is what you do:

Escaping HTML characters: Data obtained from the web usually contains a lot of HTML entities like &lt; &gt; &amp; that get embedded in the original data. It is thus necessary to get rid of these entities. One approach is to remove them directly with specific regular expressions. Another approach is to use appropriate packages and modules (for example Python's HTML parser), which can convert these entities back to standard characters. For example: &lt; is converted to "<" and &amp; is converted to "&".
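For example, Python's built-in html module can convert such entities back into plain characters (a minimal sketch; the sample string is illustrative):

import html

raw = "Battery life &lt; 10 hours &amp; screen &gt; expected"
clean = html.unescape(raw)
print(clean)  # Battery life < 10 hours & screen > expected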



Decoding data: This is the process of transforming information from complex symbols into simple, easier-to-understand characters. Text data may come in different encodings like "Latin-1", "UTF-8" etc. Therefore, for better analysis, it is necessary to keep the complete data in a standard encoding format. UTF-8 encoding is widely accepted and is recommended.


Snippet:

tweet = original_tweet.decode("utf8").encode("ascii", "ignore")  # Python 2 style; in Python 3, use original_tweet.encode("ascii", "ignore").decode("ascii")

Output:


Apostrophe Lookup: To avoid word-sense ambiguity in text, it is recommended to maintain proper structure in it and to abide by the rules of context-free grammar. When apostrophes are used, the chances of ambiguity increase.

For example, "it's" is a contraction for "it is" or "it has".

All the apostrophes should be converted into standard lexicons. One can use a lookup table of all possible contractions to get rid of the ambiguity.


Snippet:

APPOSTOPHES = {"'s": " is", "'re": " are"}  ## Need a huge dictionary of contractions
words = tweet.split()
reformed = [APPOSTOPHES[word] if word in APPOSTOPHES else word for word in words]
reformed = " ".join(reformed)

Outcome:


Removal of Stop-words: When the analysis needs to be data-driven at the word level, the commonly occurring words (stop-words) should be removed. One can either create a long list of stop-words or use a predefined, language-specific library.
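As an illustration, NLTK ships predefined stop-word lists (this sketch assumes the nltk package is installed and the stopwords corpus has been downloaded):

import nltk
from nltk.corpus import stopwords

nltk.download('stopwords')  # one-time download of the stop-word lists
stop_words = set(stopwords.words('english'))
words = "this is a tweet about the new iphone camera".split()
filtered = [w for w in words if w not in stop_words]
print(filtered)  # ['tweet', 'new', 'iphone', 'camera']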

Removal of Punctuations: All the punctuation marks should be dealt with according to their priorities. For example: ".", ",", and "?" are important punctuation marks that may be retained while others are removed.
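A minimal sketch of priority-based punctuation removal using only the standard library (which marks to keep is a judgement call and purely illustrative here):

import string

keep = {'.', ',', '?'}  # punctuation considered important enough to retain
drop = ''.join(c for c in string.punctuation if c not in keep)
tweet = "Wow!!! The new #iPhone camera, is it really that good?"
cleaned = tweet.translate(str.maketrans('', '', drop))
print(cleaned)  # Wow The new iPhone camera, is it really that good?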

Removal of Expressions: Textual data (usually speech transcripts) may contain human expressions like [laughing], [crying], [audience paused]. These expressions are usually not relevant to the content of the speech and hence need to be removed. A simple regular expression is useful in this case.
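For instance, one regular expression that strips bracketed expressions (the pattern and the sample line are illustrative):

import re

transcript = "I did not expect that [laughing] but here we are [Audience paused]"
cleaned = re.sub(r"\[[^\]]*\]", "", transcript)
print(" ".join(cleaned.split()))  # I did not expect that but here we are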

Split Attached Words: Humans on social forums generate text data that is completely informal in nature. Most tweets are accompanied by attached words like RainyDay, PlayingInTheCold etc. These can be split into their normal forms using simple rules and regex.


Snippet:

import re
cleaned = " ".join(re.findall("[A-Z][^A-Z]*", original_tweet))

Outcome:


Slang lookup: Social media also contains a lot of slang words. These words should be transformed into standard words to make the text unambiguous. Words like luv will be converted to love, and Helo to Hello. The same approach as the apostrophe lookup can be used to convert slang to standard words. A number of sources on the web provide lists of possible slang terms; they can be used as lookup dictionaries for conversion purposes.
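The snippet below calls a _slang_lookup helper that the article does not define; a minimal sketch of such a dictionary-based lookup might look like this (the slang entries are illustrative):

SLANGS = {"luv": "love", "helo": "hello", "gr8": "great"}  # lookup dictionary, extend as needed

def _slang_lookup(tweet):
    # replace each slang token with its standard form, leaving other words unchanged
    return " ".join(SLANGS.get(word, word) for word in tweet.split())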


Snippet:

tweet = _slang_lookup(tweet)

Outcome:


Standardizing words: Sometimes words are not in their proper form. For example, "I looooveee you" should be "I love you". Simple rules and regular expressions can help solve these cases.


Snippet:

import itertools
tweet = ''.join(''.join(s)[:2] for _, s in itertools.groupby(tweet))

Outcome:



Final cleaned tweet:


Advanced data cleaning:

Grammar checking: Grammar checking is largely learning-based; models are trained on large amounts of well-formed text for the purpose of grammar correction. There are many online tools available for grammar correction.

Spelling correction: Misspellings are common in natural-language text. Companies like Google and Microsoft have achieved a decent accuracy level in automated spell correction. One can use algorithms like Levenshtein distance, dictionary lookup, etc., or other modules and packages, to fix these errors.
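As a lightweight illustration, the standard library's difflib can suggest the closest word from a dictionary (the tiny word list here is an assumption; real spell correctors use much larger dictionaries and edit-distance models):

import difflib

dictionary = ["iphone", "camera", "battery", "display", "charger"]
word = "cemera"
suggestion = difflib.get_close_matches(word, dictionary, n=1)
print(suggestion)  # ['camera']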

End Notes:

Go Hack 🙂



Google: Alt Text Only A Factor For Image Search

Google’s use of alt text as a ranking factor is limited to image search. For web search, alt text is treated as regular on-page text.

This is explained by Google’s Search Advocate John Mueller during the Google Search Central SEO office-hours hangout recorded on March 18.

Mueller fields several questions related to alt text, resulting in a number of takeaways about the impact it has on SEO.

Adding alt attributes to images is recommended from an accessibility standpoint, as it’s helpful for visitors who rely on screen readers.

From an SEO standpoint, alt text is recommended when your goal is to have an image rank in image search.

As Mueller explains, alt text doesn’t add value to a page when it comes to ranking in web search.

Alt Text Is For Image Search

In the question that relates to the title of this article, Mueller is asked if alt text should be used for decorative images.

That’s a judgement call, Mueller says.

From an SEO point of view, the decision to use alt text depends on whether you care about the images showing up in image search.

Google doesn’t see a page as more valuable to web search because it has images with alt text.

When it comes to using alt text in general, Mueller recommends focusing on the accessibility aspect rather than the SEO aspect.

“I think it’s totally up to you. So I can’t speak for the accessibility point of view, so that’s the one angle that is there. But from an SEO point of view the alt text really helps us to understand the image better for image search. And if you don’t care about this image for image search, then that’s fine do whatever you want with it.

That’s something for decorative images, sometimes you just don’t care. For things like stock photos where you know that the same image is on lots of other sites, you don’t care about image search for that. Do whatever you want to do there. I would focus more on the accessibility aspect there rather than the pure SEO aspect.

It’s not the case that we would say a textual webpage has more value because it has images. It’s really just we see the alt text and we apply that to the image, and if someone searches for the image we can use that to better understand the image. It’s not that the webpage in the text web search would rank better because it has an image.”

Continue reading the next sections for more insights about alt text.

The SEO Impact Of Alt Text

In another question about alt text, Mueller is asked if it’s still worth using alt text when the image itself has text in it.

Mueller recommends avoiding using text in images altogether, but says yes – alt text could still assist in this case.

“I think, ideally, if you have text and images it probably makes sense to have the text directly on the page itself. Nowadays there are lots of ways to creatively display text across a website so I wouldn’t necessarily try to use text in images and then use the alt text as a way to help with that. I think the alt text is a great way to help with that, but ideally it’s better to avoid having text in images.”

The question goes on to ask if alt text would be useful when there’s text on the page describing what’s in the image.

In this case, from an SEO point of view, the text on the page would be enough for search engines.

However, it would still make sense to use alt text for people who use screen readers.

“From a more general point of view, the alt text is meant as a replacement or description of the image, and that’s something that is particularly useful for people who can’t look at individual images, who use things like screen readers, but it also helps search engines to understand what this image is about.

If you already have the same description for a product around the image, for search engines we kind of have what we need, but for people with screen readers maybe it still makes sense to have some kind of alt text for that specific image.”

Alt Text Should Be Descriptive

Mueller emphasizes the importance of using descriptive alt text.

The text should describe what’s in the image for people who aren’t able to view it.

Avoid using generic text, like repeating product names over and over.

“In a case like this I would avoid the situation where you’re just repeating the same thing over and over. So avoid having like the title of a product be used as an alt text for the image, but rather describe the image in a slightly different way. So that’s kind of the recommendation I would have there.

I wouldn’t just blindly copy and paste the same text that you already have on a page as an alt text for an image because that doesn’t really help search engines and it doesn’t really help people who rely on screen readers.”



Python Treatment For Outliers In Data Science

What is Feature Engineering?

When we have a LOT OF FEATURES in the given dataset, feature engineering can become quite a challenging and interesting module.

The number of features can significantly impact the model, so feature engineering is an important task in the data science life cycle.

Feature Improvements

The feature engineering family has many key factors; let's discuss outliers here. This is one of the more interesting topics and is easy to understand in layman's terms.

Outlier

An outlier is an observation of a data point that lies an abnormal distance from other values in a given population. (odd man out)

Like in the following data point (Age)

18, 22, 45, 67, 89, 125, 30 (here 125 lies far from the rest and is the outlier)

An outlier is an object(s) that deviates significantly from the rest of the object collection.

List of Cities

New York, Los Angeles, London, France, Delhi, Chennai

It is an abnormal observation during the Data Analysis stage, that data point lies far away from other values.

List of Animals

cat, fox, rabbit, fish

An outlier is an observation that diverges from well-structured data.

The root cause of an outlier can be a measurement error or a data collection error.

Quick ways to handle outliers.

Outliers can either be a mistake or just variance (as in the examples above).

If we find an outlier is due to a mistake, we can ignore it.

If we find it is due to genuine variance in the data, we can work with it.

In a picture of apples, we can spot the odd one out easily with the naked eye.

But a huge list of values in a given feature/column of a .csv file can be a real challenge for the naked eye.

First and foremost, the best way to find the outliers in a feature is visualization.

What are the Possibilities for an Outlier? 

Of course! Here are some quick reasons.

Missing values in a dataset.

Data did not come from the intended sample.

Errors occur during experiments.

Not an error at all, just a value that is unusual compared with the rest.

A more extreme distribution than normal.

That's fine, but you might have more questions about outliers if you're a real lover of data analytics, data mining, and data science.

Let’s have a quick discussion on those.

Understand more about Outlier

Outliers tell us how certain observations in the given data set differ significantly from the overall pattern; simply put, the odd one (or many) out. They can be errors introduced during data collection. Generally, outliers affect statistical results during the EDA process; a quick example is the mean and mode of a given data set, which can be misleading because the values would appear higher than they really are.

Positive relationship: when the correlation coefficient is closer to 1.

Negative relationship: when the correlation coefficient is closer to -1.

Independent: when X and Y are independent, the correlation coefficient is close to zero (0).

Outliers and their observations can also tell us about the data collection process: analyzing how they occur helps us minimize them and set better guidelines for future data collection.

Even though outliers introduce inconsistency into your dataset during analysis and significantly affect statistical results, in a few situations there are challenges and roadblocks to removing them.

DO or DO NOT (Drop Outlier)

Before dropping outliers, we must analyze the dataset with and without them and understand the impact on the results.

If you observe that an outlier is obviously due to an incorrectly entered or measured value, you can certainly drop it. No issues in that case.

If your assumptions are affected but the results do not change, you may also drop the outlier straight away.

If the outlier affects both your assumptions and your results, then there is no question: simply drop the outlier and proceed with your further steps.

Finding Outliers

So far we have discussed what outliers are, how they affect a given dataset, and whether we can drop them or not. Let's now see how to find them in a given dataset. Are you ready?

We will look at simple methods first, Univariate and Multivariate analysis.

Univariate method: I believe you're familiar with univariate analysis: playing around with one variable/feature from the given data set. Here, to look at the outlier, we're going to apply a box plot to understand its nature and where exactly it lies.

Let's see some sample code. I am taking titanic.csv as a sample for my analysis; here I am considering age.

plt.figure(figsize=(5,5))
sns.boxplot(y='age', data=df_titanic)



You can see the outliers on the top portion of the box plot visually in the form of dots.

Multivariate method: Again I am taking titanic.csv as a sample for my analysis; here I am considering age and passenger class.

plt.figure(figsize=(8,5))
sns.boxplot(x='pclass', y='age', data=df_titanic)

We can also very well use histogram and scatter plot visualization techniques to identify outliers.
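For instance, a histogram of age or a scatter plot of age against fare makes unusually extreme values stand out (this assumes the same titanic.csv columns used above; the 'fare' column is an assumption):

plt.figure(figsize=(8,5))
sns.histplot(df_titanic['age'], bins=30)  # a long tail hints at outliers

plt.figure(figsize=(8,5))
sns.scatterplot(x='age', y='fare', data=df_titanic)  # isolated points sit far from the main cloud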

Mathematically, we can find outliers using the Z-score and Inter Quartile Range (IQR) score methods, as follows.

Z-score method: the data is standardized so that the mean is 0 and the standard deviation (SD) is 1, as in the standard normal distribution.

Let's consider the ages of a group of kids below, collected during stage one of the data science life cycle. Before going into further analysis, the data scientist wants to remove outliers. Looking at the code and output, we can understand the essence of finding outliers using the Z-score method.

import numpy as np

kids_age = [1, 2, 4, 8, 3, 8, 11, 15, 12, 6, 6, 3, 6, 7, 12, 9, 5, 5, 7, 10, 10, 11, 13, 14, 14]
mean = np.mean(kids_age)
std = np.std(kids_age)
print("Mean of the kids' age in the given series:", mean)
print("STD deviation of the kids' age in the given series:", std)

threshold = 3
outlier = []
for i in kids_age:
    z = (i - mean) / std
    if z > threshold:  # flag only points far above the mean; with this small sample nothing exceeds 3 SDs, so a lower cut-off such as 1.7 reproduces the [15] shown below
        outlier.append(i)
print('Outlier in the dataset is (Teenagers):', outlier)

Output

The outlier in the dataset is (Teenagers): [15]

IQR score method: the data is divided into quartiles (Q1, Q2, and Q3), with ranges as below.

25th percentile of the data – Q1

50th percentile of the data – Q2

75th percentile of the data – Q3

Let's take the junior boxing weight category series from the given data set and figure out the outliers.

import numpy as np
import seaborn as sns

# jr_boxing_weight_categories
jr_boxing_weight_categories = [25, 30, 35, 40, 45, 50, 45, 35, 50, 60, 120, 150]

Q1 = np.percentile(jr_boxing_weight_categories, 25, interpolation='midpoint')
Q2 = np.percentile(jr_boxing_weight_categories, 50, interpolation='midpoint')
Q3 = np.percentile(jr_boxing_weight_categories, 75, interpolation='midpoint')

IQR = Q3 - Q1
print('Interquartile range is', IQR)

low_lim = Q1 - 1.5 * IQR
up_lim = Q3 + 1.5 * IQR
print('low_limit is', low_lim)
print('up_limit is', up_lim)

outlier = []
for x in jr_boxing_weight_categories:
    if x < low_lim or x > up_lim:  # keep only the points outside the whiskers
        outlier.append(x)
print('outlier in the dataset is', outlier)

Output

the outlier in the dataset is [120, 150]

sns.boxplot(jr_boxing_weight_categories)

Looking at the boxplot, we can understand where the outliers sit in the plot.

So far, we have discussed what outliers are, what they look like, whether they are good or bad for a data set, and how to visualize them using matplotlib/seaborn and statistical methods.

Now, to conclude, we can correct or remove the outliers and take the appropriate decision. We can use the same Z-score and IQR score conditions to correct or remove outliers on demand, because, as mentioned earlier, outliers are not always errors; they may simply be unusual values.

Hope this article helps you to understand the Outliers in the zoomed view in all aspects. let’s come up with another topic shortly. until then bye for now! Thanks for reading! Cheers!!



How Can I Subtract A Day From A Python Date?

Introduction

It is essential to have a module that can manipulate dates and times, since they are used in applications where we must keep track of both. Python's built-in datetime module deals with dates and times.

The installation of two libraries is necessary before any date manipulation can take place.

Dates and timings are quickly retrieved using the arrow library.

A DataFrame may be accessed and used thanks to the Pandas library.

To install these libraries, go to an IDE console and run the commands below at the command prompt. The dollar symbol ($) denotes the terminal prompt used in this illustration; the prompt on your terminal may be different.
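The commands themselves are not reproduced in the article; presumably they are simply the pip installs for the two libraries mentioned above:

$ pip install arrow
$ pip install pandas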

Methods used

time − It represents a time, independent of any particular day, with the attributes hour, minute, second, microsecond, and tzinfo.

timedelta − It represents a duration and is used to manipulate dates.

date − It represents a date according to the Gregorian calendar, with the attributes year, month, and day.

tzinfo − It provides information about the Time zone.

datetime − It combines a date and a time, with the attributes year, month, day, hour, minute, second, microsecond, and tzinfo.

Syntax

class datetime.timedelta(days=0, seconds=0, microseconds=0, milliseconds=0, minutes=0, hours=0, weeks=0)

Returns: a timedelta duration; subtracting it from a date yields a new date.

Note − if we don't specify a keyword, a bare integer argument is interpreted as a number of days.

Algorithm

Use the today() method to retrieve the current date as a string and the split() method to split it into a list.

Declare a new variable which calls datetime.date() and takes three arguments: current year, current month and day.

Declare a variable which uses timedelta and passes an integer which is the number of days to subtract from the original day.

Return the difference of the datetime.date() variable and timedelta variable.

Method 1: Use datetime.timedelta()

Example

This method retrieves the current date as a string and splits it into a list. Then, the current date (payday) is configured, and ten (10) days are subtracted (datetime.timedelta(10)) from it to return a new date.

import datetime
from datetime import date
import pandas as pd

today = str(date.today()).split('-')
payday = datetime.date(int(today[0]), int(today[1]), 25)
chqday = datetime.timedelta(10)
n_payday = payday - chqday
print("Payday =", payday)

Output

Payday = 2023-11-25

Code Explanation

The today variable retrieves the current date (yyyy-mm-dd) and splits the date string on the hyphen (split('-')). This returns the current date as a list of strings, e.g. ['2023', '05', '27']. The payday variable calls datetime.date() and takes three (3) integer arguments: the current year (int(today[0])), the current month (int(today[1])), and the day (25). The chqday variable uses timedelta and passes an integer (10), which is the number of days to subtract from the original day (25). The n_payday variable subtracts chqday from payday. Finally, payday is printed to the terminal (n_payday holds the date ten days earlier).

Method 2: Use Pandas to subtract date columns

Example

What if you want to determine the difference between two dates but don’t want to establish a new one? In this example, two dates are subtracted from one another, and the result is output as the difference in days.

import datetime
from datetime import date
import pandas as pd

df = pd.DataFrame(columns=['hired', 'fired'])
df.hired = ['2023-09-07', '2023-10-29']
df.fired = ['2023-09-07', '2023-04-29']
df.hired = pd.to_datetime(df.hired)
df.fired = pd.to_datetime(df.fired)
diff = (df.fired - df.hired)
print(diff)

Output

0   -365 days
1   -183 days
dtype: timedelta64[ns]

Code Explanation

First, a DataFrame with the columns hired and fired is generated. The result is saved to df. The next two lines add two rows to the DataFrame df and save the data to the relevant column (df.hired or df.fired). Then these two columns are converted to Datetime objects and stored back into the variables mentioned above. Finally, the hired dates are subtracted from the fired dates and the result is saved to diff.

Conclusion

datetime combines a date and a time, with the attributes year, month, day, hour, minute, second, microsecond, and tzinfo. timedelta() is used to manipulate dates. Calculate the difference between the datetime.date() variable and the timedelta variable to return the desired output, i.e. to subtract days from a Python date.

How To Flip Text On A Path In Illustrator

Adobe Illustrator is a great graphic design program to use when you want to create editable vector graphics. A vector graphic can be scaled up without losing any detail, so you can make an illustration, including typography, that will look as good on a billboard as a business card.

Imagine creating a round badge or logo, and you want to type text around a circular path. You might want the text at the bottom of the circle to flip to the opposite side of the path, so it’s easily legible. In this Illustrator tutorial, we’ll teach you how to flip text on a path in Illustrator, so the text isn’t upside down.


How to Flip Type on a Path in Illustrator

Whether using Adobe Illustrator CC or an earlier version of Illustrator, a path is simply one (or more!) straight or curved lines. A path can be open or closed, depending on whether the endpoints are joined together.

We’ll start with a simple circle design. We’ll create a circular path, and then we’ll use the type tool to type text along that path. Finally, we’ll flip some of the text, so it appears right side up along the bottom of the circle.

Select the Ellipse tool. 

Hold down the Shift key and draw a circle on the canvas. Holding the Shift key will force the ellipse you create into a perfect circle. Any stroke or fill color will disappear once you add text.

In the Type Tool flyout menu, choose the Type on a Path Tool. 

Enter the text you want at the top of the circle. 

You’ll see three handles (also called alignment brackets) near the text: one to the left, one in the middle, and one on the right. Use these handles to rotate the text around the circle until it’s right where you want it.

In the Layers panel, turn off the visibility of the bottom layer. 

Select the Type Tool, select the text on the path, and type the new text—the text you’ll move to the bottom of the circle path.

Note: For the Align to Path options, choosing Baseline will put the text right on the path. Ascender puts the text on the outside of the circle. Descender will locate the text on the inside of the circle. Lastly, Center will place the text right at the center of the path.

Next, turn the visibility of the top layer of text back on.

That’s how you add and flip text on a path in Adobe Illustrator. 

Insert a Symbol in Your Design

An easy way to add an extra element or two to a design in Adobe Illustrator is to insert something from the Symbols panel. Follow the steps below to add a symbol to your design. 

Use the Symbols Library dropdown arrow to view a list of all the libraries installed on your computer. Select one of them to launch a panel where you can use navigation arrows to page through each Symbols library. 

When you find a symbol you want to use, drag and drop it into your design.

Use the Selection Tool to resize the symbol to fit your design.

And if you’re beginning to use Adobe Indesign, you’ll want to check out our tutorials on how to link text boxes or flow text around an image.
