Manufacturing’s Missing Middle: Solving The Riddle


Mid-sized enterprises that have the scale and sophistication to compete more effectively in the modern global economy are missing, writes Melina Morrison, CEO of the Business Council of Co-operatives and Mutuals.

Photo by STR/AFP via Getty Images

Throughout the height of the Covid pandemic, sovereign risk became the catchcry and supply chain security the dictum. Australia, it turns out, was ill-prepared for the disruption in global supply of key ingredients and overly reliant on offshore manufacturing capability. The hollowing out of Australia’s once-lauded manufacturing base not only showed up as trade deficits in categories of value-added manufactured goods; in a crisis, it showed up as vulnerability.

Nowadays, as government and business spruik the idea that the best recovery is to ‘build back better’, we are forced to face certain hard realities. Skilled labour shortages are not the only issue we face in building stronger, more resilient supply chains, because manufacturing in this country has a bigger problem. It has a missing middle.

What exactly do I mean by a missing middle?  It is evidenced in the profile of today’s Australian business landscape; one that features many sub-scale small businesses and just a small number of very large businesses.  Fully 90% of Australian businesses are small and 70% of those have fewer than 20 employees.  Essentially, Australia has a much larger proportion of micro-enterprises than many other countries. What is mostly missing is those mid-sized enterprises that have the scale and sophistication to compete more effectively in the modern global economy.

“The diminished diversity and maturity that might otherwise have been provided by mid-sized businesses has reduced Australia’s resilience in the face of all these challenges.”

The need for a larger mid-sized business sector to produce more finished goods onshore was highlighted by three years of disrupted global supply chains, as the COVID pandemic left a trail of economic destruction.

It is not just the rampaging virus that has exposed weaknesses.  There have been shocks on the trade front, in labour markets, from climate-related catastrophes and due to war-fuelled energy uncertainty and inflation pressures.  The diminished diversity and maturity that might otherwise have been provided by mid-sized businesses has reduced Australia’s resilience in the face of all these challenges.

In recent years, the missing middle has also contributed to the nation’s low productivity growth.  That has a cascading impact of making it more difficult to attract and retain workers through improving wages and conditions.

A greater dependence on the global economy has arisen as once thriving local industries have vanished offshore.  One consequence has been the erosion of the nation’s taxation base as the pool of Australian taxed entities shrinks.  At the same time, vulnerabilities across many supply chains have also increased as a result.

There is no reason Australia cannot be doing more to re-establish greater self-sufficiency.  The key will be to focus on that missing middle tier of businesses.  Why not aspire to Germany’s mid-sized business sector, the “Mittelstand”, or an Australian version of the successful co-operative business clusters in Italy’s Emilia Romagna or Spain’s Basque region?

Photo by Alexander Koerner/Getty Images

I am firmly of the view that filling in the missing middle of Australian manufacturing can be achieved through co-operatives.  Co-ops aggregate smaller businesses, allowing them to share costs, develop value adding activities and access local and export markets by working together.

There are emerging examples that should be acknowledged and which could one day become stunning success stories.  One is the salad bowl of southeast Queensland, which needs a local facility, including a deep-freezing line, to process more fruits and vegetables for export markets.  The Lockyer Valley grows the most diverse range of vegetables in Australia, but has been without a local food processing facility since 2011, when a large player located in Northgate, Queensland relocated its operations overseas. For more than a decade the Lockyer Valley Fruit and Veggie Co-op has been striving to repatriate fruit and vegetable processing to the region, and it is on the verge of achieving its mission.

“If we lack the courage to build it back up, living standards will be at risk.”

In the New South Wales Hunter Valley, the HunterNet Co-op is an established industry cluster of more than 170 small and medium manufacturers.  The innovation-focused manufacturing network operates across mining, defence, energy, infrastructure, the environment, medical technology and agribusiness.  It has played a significant role in Newcastle’s feted manufacturing base and could be a model for other parts of regional Australia.

Mid-sized co-operatives also facilitate investment in new areas of opportunity and in R&D. The burgeoning field of robotics in Australia is just one industry of the future that will require the skills, scale, growth potential and capital base of mid-sized enterprises in order to flourish.

Those mid-sized co-operative and mutual enterprises that Australia does have already are often major employers in regional towns and hubs for regional economies.  It makes sense to encourage their expansion and diversification as a way of maintaining and increasing high quality jobs, wages growth and market access.

The expression “no guts, no glory” has been attributed to American Air Force Major General Frederick Corbin Blesse, who penned an air-to-air combat manual of the same name in 1955.  Make no mistake, Australia is in an economic battle and we need a laser-like focus on future-proofing our industries and co-operatively protecting our mutual prosperity.

If we lack the courage to build it back up, living standards will be at risk.

Melina Morrison is CEO of the Business Council of Co-operatives and Mutuals


Ecovacs Deebot N79S Review: Middle Of The Road

Our Verdict

The Ecovacs Deebot N79S is a more affordable robot vacuum cleaner than many, which immediately makes it attractive. While it’s got some handy elements such as a remote control and Alexa support, the overall features and performance are basic. We’d like more cleaning power, especially on carpets, and the way it navigates can easily leave areas untouched.

Getting a robot vacuum cleaner to do the hard work for you is great, but they can be really expensive. Well, the Ecovacs Deebot N79S provides a more affordable option.


Even the high-end Deebot Ozmo 930 is cheaper than some rivals but £549/$599 will be too much for a lot of consumers. After all, a robot vacuum isn’t capable of being your only cleaner.

At a much more reasonable £249/$299, the Deebot N79S is a more stomachable price point. You can buy it from Amazon and Best Buy.

This puts it in competition with the iLife A7 and Eufy RoboVac 30C. Check out the best robot vacuum cleaners in our chart.

Design & Build: Classic

The N79S is your quintessential robot vacuum when it comes to design. It’s like a huge ice hockey puck: flat, round and black.

It looks pretty much like the more premium Ozmo 930 but doesn’t have the traffic control tower-like addition on the top which houses sensors.

The device is easy to use with a master power switch on the side and then an Auto button accompanied with LEDs on top. The dust compartment comes out at the back, which is used as a water reservoir on the 930.

You get a docking station for the N79S to charge and the vacuum even comes with a remote control making it even easier to use. You can even hook it up to Alexa for voice activation.

Also in the box are the two rotating brushes you’ll need to attach and the main brush which sits underneath. Do bear in mind that the robot plus the dock takes up a fair bit of space so make sure you have somewhere convenient for it to live first.

Features & Performance: You suck

The N79S is a lot more basic than the Ozmo 930, hence the price difference. This robot vacuum pretty much just does normal cleaning, but can handle both carpet and hard floors.

It doesn’t have mopping or any intelligent navigation so while the 930 cleverly goes up and down your floor methodically, the N79S will just go until it finds an obstacle. It then turns and sets off again. The end result means it bounces round the room sort of aimlessly like a ball in the classic game, Breakout.

There are some other cleaning modes available though, including spot clean, edge clean and max mode. Using the app you can even drive the N79S around like a remote control car, which we found was often the best way to target dirty areas, but that defeats the point of it doing the work for you.

Sensors do stop it falling down stairs and the like, and generally they work well, but we were miffed when the N79S got stuck between a chair and a desk when there was plenty of space for it to drive away. It just went round in circles until we physically moved it.

Generally, the cleaning performance is good but we’d like it to be better. In normal mode (not Max), the N79S will pick up loose dirt but struggles to deal with anything more embedded into carpet. This combined with the way it navigates the room means that there will be unclean areas.

We’ve also noticed the vacuum skipping and bobbling over carpet frequently despite there being nothing obvious to cause this.


The N79S might be cheaper than the Ozmo 930 and other rivals, but there’s good reason for that. It doesn’t have the same power, attachments and extra features like the ability to mop.

For some, this will be a perfectly good amount of sacrifice in order to afford one in the first place. Just bear in mind that it’s generally basic on the whole so don’t expect immaculately clean floors.

You can make good use of the remote, app and Alexa support to get the most out of it so it’s still a good choice.

Ecovacs Deebot N79S: Specs

2x Side brushes

1x Main brush

Docking station

Remote control


App support

Alexa or Google Home compatible

Auto-clean, spot mode, edge mode & max mode

Up to 120 min runtime


Samsung Galaxy S10 Review: Finding The Middle Ground Is Hard

Read more: Here’s everything new in Samsung One UI 3.0

Here is Android Authority’s Samsung Galaxy S10 review.

About our Samsung Galaxy S10 review: We tested the Samsung Galaxy S10 on T-Mobile’s network in New Jersey, New York City, San Diego, and Los Angeles over the course of 10 days. It ran Android 9 Pie with Samsung’s OneUI v1.1. The review unit was provided to Android Authority by T-Mobile.

Samsung Galaxy S10 review: The big picture


Aluminum chassis

Gorilla Glass 6

Nano SIM / MicroSD memory card

150 x 70 x 7.8 mm


3.5mm headphone jack

Fingerprint reader (under display)

Black, Blue, Pink, White




6.1-inch Quad HD+ Super AMOLED

3,040 by 1,440 pixels with 551ppi

19:9 aspect ratio

Single selfie cutout


Snapdragon 855

2.8GHz octa-core, 7nm process


128GB storage

The S10 is among the first to ship with the Snapdragon 855, the top-of-the-line chip from Qualcomm. All the base Galaxy S10 devices include a minimum 8GB of RAM, which is stellar.

It should come as no surprise that the S10 crushed the usual trio of benchmarks. It scored 5,641/4,831 on the 3DMark Sling Shot Extreme for OpenGL ES and Vulkan, respectively. That’s better than 90 percent of competing devices. Similarly, it amassed an impressive 354,718 in AnTuTu. This score bested 90 percent of other phones, as well. Last, in Geekbench the S10 churned out 3,423 / 10,340 for the single- and multi-core tests, respectively.

After an initial hiccup that necessitated a factory reset, we’ve seen nothing but excellent performance from the Galaxy S10.


3,400mAh Lithium ion

Qualcomm Quick Charge 2.0

Qi wireless charging

Wireless PowerShare

Wireless PowerShare is more gimmick than gimme.

Samsung provides plenty of control over how the phone draws power. The easiest way will be to select the power mode that best matches your needs at the time. The phone ships in optimized mode, which balances performance and battery life. You can jump to high performance for gaming, or dial back to medium power saving mode or maximum power saving mode when you need to conserve power.

Rear cameras:

12MP 2x telephoto sensor, autofocus, OIS, 45-degree FoV, ƒ/2.4 aperture

12MP wide-angle sensor, autofocus, OIS, 77-degree FoV, dual  ƒ/1.5 and ƒ/2.4 apertures

16MP ultra-wide sensor, 123-degree FoV, ƒ/2.2 aperture

Front camera:

10MP sensor, autofocus, 80-degree FoV, ƒ/1.9 aperture

Last up, video. The Galaxy S10 can shoot video up to 4K at various frame rates. I was pleased with the results, which were more consistently good than results from the still camera. Sound captured along with the video is also quite good.

Full-resolution photo samples from the Samsung Galaxy S10 are available here.


3.5mm headphone jack

Bluetooth 5 with aptX HD

Stereo speakers

FM radio


Android 9 Pie

Samsung OneUI v1.1

The mechanics of the underlying Android 9 Pie operating system are intact. You can opt from several home screen styles, easily access the Quick Settings/notification shade, and control nearly every facet of the theme. (Yes, you can download wallpapers that highlight and/or hide the punch hole.) It’s mostly fluid as you move through the menus. Samsung kept its Edge Screen tool, which acts like a quick-access panel for certain apps and contacts.

Samsung still insists on foisting Bixby on everyone. A dedicated Bixby button appears on the left edge of the phone and consumes the left-most home screen panel. Samsung has refreshed the look of Bixby and I think it’s better, but the voice assistant’s functionality is still not where it needs to be. Samsung added Bixby Routines, which let you combine certain actions in a manner similar to IFTTT and Siri Shortcuts. The good news is that Samsung is finally allowing people to remap the dedicated button to other apps (with the exception of voice assistants).


Samsung Galaxy S10e: $749.99 (128GB), $849.99 (256GB)

Samsung Galaxy S10: $899.99 (128GB), $1,149.99 (512GB)

Samsung Galaxy S10 Plus: $999.99 (128GB), $1,249.99 (512GB), $1,599.99 (1TB and 12GB of RAM)

And that wraps up our Samsung Galaxy S10 review. Will you buy this phone?

Solving Spotify Multiclass Genre Classification Problem


The music industry has become more popular, and how people listen to music is changing rapidly. The development of music streaming services has increased the demand for automatic music categorization and recommendation systems. Spotify, one of the world’s leading music streaming platforms, has millions of subscribers and a massive song catalog. Yet, for customers to have a personalized music experience, Spotify must recommend tracks that fit their preferences. Spotify uses machine learning algorithms to recommend music and categorize it by genre.

This project will focus on the Spotify Multiclass Genre Classification problem, where we download the Dataset from Kaggle.

Goal: This project aims to develop a classification model that can accurately predict the genre of a music track on Spotify.

Learning Objectives

To investigate the link between music genres on Spotify and their acoustic characteristics.

To create a classification model based on auditory characteristics to predict the genre of a given song.

To investigate the distribution of various Spotify music genres in the dataset.

To clean and preprocess data in order to prepare it for modeling.

To assess the categorization model’s performance and improve its accuracy.

This article was published as a part of the Data Science Blogathon.

Prerequisites

Before we begin implementation, we must install and import some of the libraries. The libraries listed below are required:

Pandas: A library for data manipulation and analysis.

NumPy: A scientific computing package used for matrix computations.

Matplotlib: A plotting library for the Python programming language.

Seaborn: A data visualization library based on matplotlib.

Scikit-learn (sklearn): A machine learning library for building classification models.

TensorFlow: A popular open-source library for building and training deep learning models.

To install these, we run this command.

!pip install pandas
!pip install numpy
!pip install matplotlib
!pip install seaborn
!pip install scikit-learn
!pip install tensorflow

Project Pipeline

Data Preprocessing: Clean and preprocess the “genres_v2” dataset to prepare it for machine learning.

Feature Engineering: Extract meaningful features from the audio files that will help us train our model.

Model Selection: Evaluate several machine learning algorithms to find the best-performing model.

Model Training: Train the selected model on the preprocessed Dataset and evaluate its performance.

Model Deployment: Deploy the trained model in an online application that can recommend music tracks on Spotify based on the user’s preferences

So, let’s get started doing some code.


First, we need to download the data set. You can download the Dataset from Kaggle. We need to import the necessary libraries to perform our tasks.

import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn import preprocessing
from sklearn import metrics
import numpy as np
import tensorflow as tf
from tensorflow import keras
from sklearn.decomposition import PCA, KernelPCA, TruncatedSVD
from sklearn.manifold import Isomap, TSNE, MDS
import random
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis
import warnings
warnings.simplefilter("ignore")

Load the Dataset

We load the Dataset using pandas read_csv. It contains 42,305 rows and 22 columns, covering 18,000+ tracks.

data = pd.read_csv("Desktop/genres_v2.csv")
data

Exploring the Data

I use the iloc method to select rows and columns of a data frame by their integer index positions, starting with the first 20 columns of the df.


data.iloc[:, :20]  # the first 20 columns

data.iloc[:, 20:]  # the columns from the 21st onward

When you call, it will print the following information:

The number of rows and columns in the data frame.

The name of each column, its data type, and the number of non-null values in that column.

A summary of the number of columns of each data type.

The memory usage of the DataFrame.
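This summary comes from the DataFrame’s info() method; a minimal sketch on a toy frame (the column names here are illustrative stand-ins for the Spotify data):

```python
import io

import pandas as pd

# Toy stand-in for the Spotify data frame
data = pd.DataFrame({"genre": ["Rap", "Pop", "Rap"], "tempo": [120.0, 98.5, None]})

# info() writes the row/column counts, dtypes, non-null counts, and memory usage
buf = io.StringIO()
data.info(buf=buf)
summary = buf.getvalue()
print(summary)
```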

data.nunique()  # number of unique values in our data set

Data Cleaning

Here, we want to clean our data by removing unnecessary columns that add no value to the prediction.

df = data.drop(["type", "id", "uri", "track_href", "analysis_url", "song_name", "Unnamed: 0", "title", "duration_ms", "time_signature"], axis=1)
df

We have removed some columns that add no value to this particular problem, passing axis = 1 so that columns rather than rows are dropped. We then call the Data Frame again to see the new, slimmer Data Frame.
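The axis argument is worth a quick illustration; a minimal sketch with a toy frame:

```python
import pandas as pd

toy = pd.DataFrame({"a": [1, 2], "b": [3, 4], "c": [5, 6]})

# axis=1 drops columns by label...
no_cols = toy.drop(["b"], axis=1)

# ...whereas axis=0 (the default) drops rows by index
no_rows = toy.drop([0], axis=0)

print(list(no_cols.columns))  # ['a', 'c']
print(list(no_rows.index))    # [1]
```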

The df.describe() method generates descriptive statistics of a Pandas Data Frame. It summarizes the central tendency, the dispersion and the shape of a dataset’s distribution.

After running this command, you can see all the descriptive statistics of the Data Frame, like std, mean, median, percentile, min, and max.
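As a quick sketch of what describe() returns (toy numbers, not the real dataset):

```python
import pandas as pd

toy = pd.DataFrame({"tempo": [100.0, 120.0, 140.0]})
stats = toy.describe()

# describe() rows include count, mean, std, min, the quartiles, and max
print(stats.loc["mean", "tempo"])  # 120.0
print(stats.loc["min", "tempo"])   # 100.0
```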


To display a summary of a Pandas DataFrame or Series, use the function. It gives information about the Dataset, such as the number of rows and columns, the data type of each column, the number of non-null values in each column, and the dataset’s memory usage.

ax = sns.histplot(df["genre"]) generates a histogram of the distribution of values in the "genre" column of a Pandas DataFrame named df. This code may be used to visualize the frequency of Spotify genres in a music dataset.

ax = sns.histplot(df["genre"])
_ = plt.xticks(rotation=60)
_ = plt.title("Genres")

The following code deletes all rows in the Pandas DataFrame where the value in the “genre” column is equal to “Pop”. The DataFrame’s index is then reset so that it starts from 0. Lastly, it computes the correlation matrix of the DataFrame’s remaining columns.

This code helps study a dataset by deleting unnecessary rows and finding correlations between the remaining variables.





df.drop(df[df["genre"] == "Pop"].index, inplace=True)

df = df.reset_index(drop=True)

df.corr()




The following code, sns.heatmap(df.corr(), cmap='coolwarm', annot=True) followed by plt.show(), generates a heatmap depicting the correlation matrix of the Pandas DataFrame df.

plt.subplots(figsize=(10,9))
sns.heatmap(df.corr(), cmap='coolwarm', annot=True)

The following code picks a subset of columns of the Pandas DataFrame df and names it x: all columns from the DataFrame’s beginning up to and including the “tempo” column. It then chooses the DataFrame’s “genre” column as the target variable and assigns it to y.

The x variable represents a Pandas DataFrame with a subset of the original columns, and the y variable represents a Pandas Series with the “genre” column values.

The method y.unique() retrieves the unique values of the target Series; for the DataFrame x, x.nunique() counts the unique values in each column. These routines are helpful for determining the number of unique values in a dataset’s variables.

x = df.loc[:, :"tempo"]
y = df["genre"]
x
y
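On a Series, unique() returns the distinct values; a DataFrame has no unique() method, but nunique() counts distinct values per column. A toy sketch:

```python
import pandas as pd

# Illustrative stand-ins for the target y and feature frame x
y_toy = pd.Series(["Rap", "Pop", "Rap"])
x_toy = pd.DataFrame({"tempo": [120, 120, 98], "energy": [0.5, 0.7, 0.5]})

print(sorted(y_toy.unique()))      # ['Pop', 'Rap']
print(x_toy.nunique().to_dict())   # counts of distinct values per column
```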


I am not including all the output images here; you can check the notebook below.

The given code generates a grid of distribution plots that lets us view the distribution of values across several columns of the dataset. Showing the distribution of each column helps reveal patterns, trends, and outliers in the data, which is useful for exploratory data analysis and for finding potential faults or inaccuracies in a dataset.

k = 0
plt.figure(figsize=(18, 14))
for i in x.columns:
    plt.subplot(4, 4, k + 1)
    sns.distplot(x[i])
    plt.xlabel(i, fontsize=11)
    k += 1

Here, we plot each column of x using the for loop.

Model Training

The following code divides the dataset into training and testing subsets. It randomly splits the input and target variables into 80% training and 20% testing groups. The descriptive statistics of the training data are then printed to aid data exploration and the identification of possible problems.


Here we are splitting the data into training and testing (size = 20%), and we are using the describe function to see the descriptive statistics.
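The split call itself is not shown at this point in the article; based on the call that appears later in the pipeline, it presumably looks like this (toy data stands in for the real x and y):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Toy stand-ins for the feature frame x and target series y
x = pd.DataFrame({"tempo": range(10), "energy": range(10)})
y = pd.Series(["Rap"] * 5 + ["Pop"] * 5)

# 80/20 split with a fixed seed, matching the call used later in the article
xtrain, xtest, ytrain, ytest = train_test_split(
    x, y, test_size=0.2, random_state=42, shuffle=True
)
print(len(xtrain), len(xtest))  # 8 2
```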

The MinMaxScaler() function from the sklearn.preprocessing module is used for feature scaling. The training data’s column names are stored in the variable col. The scaler object is then fitted on and used to transform the xtrain data, while the xtest data is only transformed.

col = xtrain.columns

scalerx = MinMaxScaler()

xtrain = scalerx.fit_transform(xtrain)

xtest = scalerx.transform(xtest)

xtrain = pd.DataFrame(xtrain, columns = col)

xtest = pd.DataFrame(xtest, columns = col)

Here we use the MinMaxScaler, mainly for scaling and normalizing the data.
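MinMaxScaler maps each column to [0, 1] via (x - min) / (max - min), with min and max computed on the training data only; a minimal sketch:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

train = np.array([[0.0], [50.0], [100.0]])
test = np.array([[25.0], [150.0]])  # test values outside the train range can fall outside [0, 1]

scaler = MinMaxScaler()
train_scaled = scaler.fit_transform(train)  # fit on the training data only
test_scaled = scaler.transform(test)        # reuse the training min/max

print(train_scaled.ravel())  # approximately [0., 0.5, 1.]
print(test_scaled.ravel())   # approximately [0.25, 1.5]
```

Note that transform on the test set reuses the training statistics, which is why a test value above the training maximum scales to more than 1.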

The following allows us to see the descriptive statistics of the xtrain and xtest.



The LabelEncoder() function from the sklearn.preprocessing package is used to encode labels. It uses the fit_transform() and transform() routines to encode the categorical target variables (ytrain and ytest) into numerical values.

The training and testing data for the input (x) and target (y) variables are then concatenated. The numerical labels are then inverse-transformed back into their original categorical values (y_train, y_test, and y_org).

Next, we use the np.unique() method, which returns the individual categories in the training data.
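A toy round trip through LabelEncoder (illustrative labels, not the real genre list):

```python
import numpy as np
from sklearn import preprocessing

le = preprocessing.LabelEncoder()
labels = ["Rap", "Pop", "Rap", "Techno"]

encoded = le.fit_transform(labels)       # genres -> integers, in alphabetical order of class
decoded = le.inverse_transform(encoded)  # integers -> original genre names

print(encoded.tolist())            # [1, 0, 1, 2]
print(list(decoded))               # ['Rap', 'Pop', 'Rap', 'Techno']
print(np.unique(labels).tolist())  # ['Pop', 'Rap', 'Techno']
```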

Lastly, the seaborn library generates a heatmap to illustrate the correlations between the input features. This is a critical stage of examining and preparing data for machine-learning models.

le = preprocessing.LabelEncoder()
ytrain = le.fit_transform(ytrain)
ytest = le.transform(ytest)
x = pd.concat([xtrain, xtest], axis = 0)
y = pd.concat([pd.DataFrame(ytrain), pd.DataFrame(ytest)], axis = 0)
y_train = le.inverse_transform(ytrain)
y_test = le.inverse_transform(ytest)
y_org = pd.concat([pd.DataFrame(y_train), pd.DataFrame(y_test)], axis = 0)
np.unique(y_train)
plt.subplots(figsize=(8,6))
ax = sns.heatmap(xtrain.corr()).set(title = "Correlation between Features")

PCA is a popular dimensionality reduction approach that may assist in decreasing the complexity of large datasets and increasing the performance of machine learning models.

With input data x, the code uses PCA to reduce the features to the two components that explain the most variation. The reduced Dataset is shown on a 2D scatter plot, with dots colored by the class labels in y. This aids in visualizing the separation of classes in the reduced feature space.

pca = PCA(n_components=2)
x_pca = pca.fit_transform(x, y)
plot_pca = plt.scatter(x_pca[:,0], x_pca[:,1], c=y)
handles, labels = plot_pca.legend_elements()
lg = plt.legend(handles, list(np.unique(y_org)), loc = 'center right', bbox_to_anchor=(1.4, 0.5))
plt.xlabel("PCA 1")
plt.ylabel("PCA 2")
_ = plt.title("PCA")
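The original code does not report how much variance the two components actually capture; explained_variance_ratio_ does that. A sketch on random toy data (not the Spotify features):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
x_toy = rng.normal(size=(100, 5))  # toy stand-in for the feature matrix

pca = PCA(n_components=2)
x_2d = pca.fit_transform(x_toy)

# Fraction of the total variance captured by each of the two components
print(pca.explained_variance_ratio_)
print(x_2d.shape)  # (100, 2)
```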

t-SNE is a popular nonlinear dimensionality reduction approach that may assist in decreasing the complexity of large datasets and improve the performance of machine learning models.

Using t-Distributed Stochastic Neighbor Embedding (t-SNE) on the input data x reduces the number of features in the high-dimensional space to 2D while maintaining similarity between Data points.

A 2D scatter plot shows the reduced Dataset, with dots colored according to their y-class labels. It helps visualize the division of some classes in the reduced feature space.

tsne = TSNE(n_components=2)
x_tsne = tsne.fit_transform(x, y)
plot_tsne = plt.scatter(x_tsne[:,0], x_tsne[:,1], c=y)
handles, labels = plot_tsne.legend_elements()
lg = plt.legend(handles, list(np.unique(y_org)), loc = 'center right', bbox_to_anchor=(1.4, 0.5))
plt.xlabel("T-SNE 1")
plt.ylabel("T-SNE 2")
_ = plt.title("T-SNE")

SVD is a popular dimensionality reduction approach that may assist in decreasing the complexity of large datasets and increasing the performance of machine learning models.

The following code applies Singular Value Decomposition (SVD) on the input data x with n components=2, reducing the number of input features to two that explain the most variance in the data. The reduced Dataset is then shown on a 2D scatter plot, with the dots colored based on their y-class labels.

This facilitates visualizing the division of multiple classes in the reduced feature space, and the scatter plot is made with the matplotlib tool.

svd = TruncatedSVD(n_components=2)
x_svd = svd.fit_transform(x, y)
plot_svd = plt.scatter(x_svd[:,0], x_svd[:,1], c=y)
handles, labels = plot_svd.legend_elements()
lg = plt.legend(handles, list(np.unique(y_org)), loc = 'center right', bbox_to_anchor=(1.4, 0.5))
plt.xlabel("Truncated SVD 1")
plt.ylabel("Truncated SVD 2")
_ = plt.title("Truncated SVD")

LDA is a popular dimensionality reduction approach that can increase machine learning model performance by decreasing the influence of irrelevant information.

The following code does Linear Discriminant Analysis (LDA) on the input data x with n components=2, which reduces the number of input features to two linear discriminants that maximize the division between the different classes in the data.

The reduced Dataset is then shown on a 2D scatter plot, with the dots colored based on their y-class labels. This aids in visualizing the division of some classes in the reduced feature space.

lda = LinearDiscriminantAnalysis(n_components=2)
x_lda = lda.fit_transform(x, y.values.ravel())
plot_lda = plt.scatter(x_lda[:,0], x_lda[:,1], c=y)
handles, labels = plot_lda.legend_elements()
lg = plt.legend(handles, list(np.unique(y_org)), loc = 'center right', bbox_to_anchor=(1.4, 0.5))
plt.xlabel("LDA 1")
plt.ylabel("LDA 2")
_ = plt.title("Linear Discriminant Analysis")

The following code substitutes several values in the ‘genre’ column of the Data Frame with the new label ‘Rap’. Specifically, it replaces the values “Trap Metal”, “Underground Rap”, “Emo”, “RnB”, and so on with “Rap”. This is useful for grouping related genres under a single name for analysis or modeling.

df = df.replace("Trap Metal", "Rap")
df = df.replace("Underground Rap", "Rap")
df = df.replace("Emo", "Rap")
df = df.replace("RnB", "Rap")
df = df.replace("Hiphop", "Rap")
df = df.replace("Dark Trap", "Rap")
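The chain of replace() calls above can equivalently be collapsed into a single call with a list of values; a sketch on a toy frame:

```python
import pandas as pd

df_toy = pd.DataFrame({"genre": ["Trap Metal", "Emo", "Pop", "Hiphop"]})

# One replace() call mapping every listed genre to "Rap"
merged = df_toy.replace(
    ["Trap Metal", "Underground Rap", "Emo", "RnB", "Hiphop", "Dark Trap"], "Rap"
)
print(merged["genre"].tolist())  # ['Rap', 'Rap', 'Pop', 'Rap']
```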

The code below generates a histogram with the seaborn library to illustrate the distribution of the variable “genre” in the input dataset df. The x-axis labels are rotated by 30 degrees to improve their visibility, and the title is “Genres”.

plt.subplots(figsize=(8,6))
ax = sns.histplot(df["genre"])
_ = plt.xticks(rotation=30)
_ = plt.title('Genres')

The provided code removes rows from the Data Frame. Specifically, using a random number generator, it drops each row where the genre column value is “Rap” with a probability of 0.85.

The rows to be discarded are collected in the list rows_drop before being removed from the Data Frame with the drop function. The code then plots a histogram of the remaining genre values with seaborn and sets the title and the rotation of the x-axis labels with matplotlib’s title and xticks methods.

rows_drop = []
for i in range(len(df)):
    if df.iloc[i]['genre'] == 'Rap':
        if random.random() < 0.85:
            rows_drop.append(i)
df.drop(index = rows_drop, inplace=True)
ax = sns.histplot(df["genre"])
_ = plt.xticks(rotation=30)
_ = plt.title("Genres")

The code provided preprocesses the data. The first step divides the input data into training and testing sets using the sklearn library’s train_test_split function.

It then scales the numerical features in the supplied data using the MinMaxScaler function from the same package, and encodes the categorical target variable using the preprocessing module’s LabelEncoder function.

As a result, the previously preprocessed training and testing sets are merged into a single dataset that the machine learning algorithm can process.

x = df.loc[:, :"tempo"]
y = df["genre"]
xtrain, xtest, ytrain, ytest = train_test_split(x, y, test_size= 0.2, random_state=42, shuffle = True)
col = xtrain.columns
scalerx = MinMaxScaler()
xtrain = scalerx.fit_transform(xtrain)
xtest = scalerx.transform(xtest)
xtrain = pd.DataFrame(xtrain, columns = col)
xtest = pd.DataFrame(xtest, columns = col)
le = preprocessing.LabelEncoder()
ytrain = le.fit_transform(ytrain)
ytest = le.transform(ytest)
x = pd.concat([xtrain, xtest], axis = 0)
y = pd.concat([pd.DataFrame(ytrain), pd.DataFrame(ytest)], axis = 0)
y_train = le.inverse_transform(ytrain)
y_test = le.inverse_transform(ytest)
y_org = pd.concat([pd.DataFrame(y_train), pd.DataFrame(y_test)], axis = 0)

This code creates two early-stopping callbacks for model training, one based on validation loss and the other on validation accuracy. Keras’ Sequential API builds a neural network with several dense layers using the ReLU activation function, batch normalization, and dropout regularization. The final output layer uses the softmax activation function to output class probabilities, and the model summary is printed to the console.

early_stopping1 = keras.callbacks.EarlyStopping(monitor = "val_loss", patience = 10, restore_best_weights = True)
early_stopping2 = keras.callbacks.EarlyStopping(monitor = "val_accuracy", patience = 10, restore_best_weights = True)

model = keras.Sequential([
    keras.layers.Input(name = "input", shape = (xtrain.shape[1],)),
    keras.layers.Dense(256, activation = "relu"),
    keras.layers.BatchNormalization(),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(128, activation = "relu"),
    keras.layers.Dense(128, activation = "relu"),
    keras.layers.BatchNormalization(),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(64, activation = "relu"),
    keras.layers.Dense(max(ytrain) + 1, activation = "softmax")
])
model.summary()

The following code block uses Keras to compile and train the neural network. The loss function is "sparse_categorical_crossentropy" and the optimizer is Adam. The model is trained for up to 100 epochs, with the early-stopping callbacks ending training when validation loss and validation accuracy stop improving.

model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

model_history = model.fit(xtrain, ytrain, epochs=100, verbose=1, batch_size=128, validation_data=(xtest, ytest), callbacks=[early_stopping1, early_stopping2])
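Sparse categorical cross-entropy, the loss used here, simply looks up the predicted probability of the true (integer-encoded) class and takes its negative log. A minimal, Keras-free sketch with made-up probabilities:

```python
import numpy as np

def sparse_categorical_crossentropy(y_true, probs):
    """Per-sample loss: -log p(true class).
    y_true holds integer class labels; probs holds one row of
    class probabilities per sample (as produced by a softmax layer)."""
    rows = np.arange(len(y_true))
    return -np.log(probs[rows, y_true])

# Two samples, three genres: the model is confident and correct on the
# first sample, unsure on the second.
probs = np.array([[0.8, 0.1, 0.1],
                  [0.3, 0.4, 0.3]])
y_true = np.array([0, 2])
losses = sparse_categorical_crossentropy(y_true, probs)
print(losses)  # approx. [0.223, 1.204]
```

The "sparse" variant accepts integer labels directly, which is why the LabelEncoder output can be fed to the model without one-hot encoding.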

The training data is passed as xtrain and ytrain, while the validation data is passed as xtest and ytest. The model's training history is saved in the model_history variable.

print(model.evaluate(xtrain, ytrain))
print(model.evaluate(xtest, ytest))

The following code generates a plot using matplotlib, with the epoch on the x-axis and the sparse categorical cross-entropy loss on the y-axis.

plt.plot(model_history.history["loss"])
plt.plot(model_history.history["val_loss"])
plt.legend(["loss", "validation loss"], loc="upper right")
plt.title("Train and Validation Loss")
plt.xlabel("epoch")
plt.ylabel("Sparse Categorical Cross Entropy")

Same as above, but here we plot accuracy against the epoch.

plt.plot(model_history.history["accuracy"])
plt.plot(model_history.history["val_accuracy"])
plt.legend(["accuracy", "validation accuracy"], loc="upper right")
plt.title("Train and Validation Accuracy")
plt.xlabel("epoch")
plt.ylabel("Accuracy")

The following code computes ypred, the predicted genre class for each sample in xtest.

ypred = model.predict(xtest).argmax(axis=1)
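model.predict returns one row of class probabilities per sample, and argmax(axis=1) collapses each row to the index of the most probable genre. A tiny sketch with fabricated model output (three samples, four genres) illustrates the step:

```python
import numpy as np

# Fake softmax output for three test samples over four genres.
probs = np.array([[0.10, 0.70, 0.10, 0.10],
                  [0.05, 0.05, 0.20, 0.70],
                  [0.60, 0.20, 0.10, 0.10]])

# Pick the index of the highest-probability genre per row.
ypred = probs.argmax(axis=1)
print(ypred)  # [1 3 0]
```

The resulting integer labels can be mapped back to genre names with the LabelEncoder's inverse_transform, as done earlier for y_train and y_test.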

The following code plots the confusion matrix for ytest and ypred, showing how often each genre is predicted correctly and which genres are confused with one another.

cf_matrix = metrics.confusion_matrix(ytest, ypred)
_ = sns.heatmap(cf_matrix, fmt=".0f", annot=True)
_ = plt.title("Confusion Matrix")
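The per-class precision, recall, and F1-score can be recovered directly from the confusion matrix: with sklearn's convention (rows = true labels, columns = predictions), precision for class i is the diagonal entry divided by its column sum, and recall is the diagonal entry divided by its row sum. A NumPy sketch with a toy 3-class matrix (made-up counts, not the Spotify results):

```python
import numpy as np

# Toy confusion matrix: rows = true labels, columns = predictions.
cf = np.array([[50,  5,  5],
               [10, 40, 10],
               [ 0,  5, 55]])

diag = np.diag(cf).astype(float)
precision = diag / cf.sum(axis=0)  # column sums: all samples predicted as class i
recall = diag / cf.sum(axis=1)     # row sums: all samples truly of class i
f1 = 2 * precision * recall / (precision + recall)

print(np.round(precision, 3))  # per-class precision
print(np.round(recall, 3))     # per-class recall
print(np.round(f1, 3))         # per-class F1-score
```

These are the same quantities that classification_report prints, so the heatmap and the report are two views of the same matrix.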

Finally, we will evaluate the model.

Model Evaluation

The following code prints the classification report for ytest and ypred, where we can see the precision, recall, and F1-score for each genre. Based on these values, we can decide whether to proceed with the model.

print(metrics.classification_report(ytest, ypred))

Conclusion

In conclusion, we could categorize Spotify music genres with an accuracy of 88% using the analysis and modeling done in this study. Given the complexity and subjectivity in defining music genres, this is a reasonable level of accuracy. Yet, there is always an opportunity for improvement, and our analysis has a few limitations.

One restriction is the likelihood of human error in labeling the data, which might have introduced genre categorization discrepancies. To address this, more sophisticated approaches, such as deep learning models that label music automatically from its auditory attributes, could be used.

Our analysis and modeling give a solid foundation for categorizing Spotify music genres, but more study and improvements are required to increase the model’s accuracy and resilience.

Key Takeaways 

Auditory characteristics such as tempo, danceability, energy, and valence differ across Spotify music genres and help distinguish them.

Data cleaning and preprocessing are critical processes in preparing data for modeling and can significantly influence model performance.

Early stopping approaches, such as monitoring validation loss and accuracy, can help to prevent model overfitting.

Increasing the dataset size, adding features, and experimenting with alternative methods and hyperparameters could further enhance the classification model's performance.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.


Smart Cities Solving Parking And Driving Challenges

Smart cities and cars are getting even smarter as technological disruption continues to shake up the transportation industry. By 2023, more than 380 million connected cars are expected to be on the road, as automakers plan to connect the majority of the vehicles they sell, according to research from BI Intelligence. This connectivity is leading to new solutions for urban planning applications, data analysis and problem-solving.

Parking Made Easier

Ride-sharing services that drop you at the front door and smartphone maps that guide you to your destination are great, but what happens when you need to park your car? Taking the pain out of parking is just one of the disruptions coming as smart city initiatives find ways to ease the friction inherent in the transportation industry. After all, even self-driving cars will need somewhere to park.

One app developer, SpotHero, launched a developer platform to help garages, parking lots and valet services connect with drivers by integrating parking reservations into their own apps. Much like apps incorporating Google Maps for location and directions for “find a store” functionality, SpotHero does the same thing for parking by connecting drivers with parking operators and taking a commission on each transaction.


Other companies are producing apps with different parking angles. Parkopedia gives users real-time visibility into the parking situation at meters, parking lots and even private driveways. Google is also adding predictive parking information to its Android-based Maps app.

Parkmobile provides on-demand and prepaid parking services in 36 of the top 100 U.S. cities, and also serves airports and major sports and entertainment venues. Users can find parking spaces and pay to park using the Parkmobile app or by calling the toll-free number on the designated parking meter stickers.

Linking Streets and Cars

Making parking easier isn’t the only goal for smart cities. Other initiatives are aimed at making it easier for cars and infrastructure to communicate. Lear, an automotive component supplier, is testing a vehicle connectivity device in Detroit. The device will help cities know where cars are traveling, so the infrastructure can respond by changing streetlights, adjusting traffic patterns and managing roadway maintenance. Real-time response to traffic conditions can make traffic flow more smoothly and reduce traffic emissions.

A pilot smart signals program in Pittsburgh led by Surtrac reduced travel time by 25 percent and cut idling time by more than 40 percent. In addition to reducing emissions from idling, the program could also lead to lower demand for on-street parking and road expansion. Pittsburgh is also a test area for Uber's self-driving cars, and the intelligent traffic signals will communicate with the autonomous vehicles for fluid traffic through intersections.

The next step is to equip cars to talk to traffic signals. The Surtrac Pittsburgh test includes short-range radios at 24 intersections around the city. Radio-equipped cars are expected to be on the market soon, and after-market products like Lear’s could be fitted to existing vehicles.

By expanding use cases for connected cars through parking apps and vehicle-to-infrastructure communication, smart cities are setting a new precedent for automakers, civil planners and citizens in urban areas.

Windows Driver Foundation Missing [Services Error Fix]




Several users have reported the Windows Driver Foundation missing issue on their PC.

This error can occur because of outdated drivers, missing system files, the essential Windows service not running, etc.




It can be really frustrating when your Windows PC lags or runs slowly when playing games or doing resource-intensive tasks.

Your PC might have all the processing power to handle those tasks, but if some drivers fail to load, then you will experience multiple issues with your PC.

One such error is the Windows Driver Foundation missing issue. When this error pops up, you should know that some important system-related drivers failed to load on your Windows PC.

Since drivers are one of the most important components that let the hardware communicate with the PC, any driver issue can cause the hardware to malfunction.

There are several users that have reported the Windows Driver Foundation missing error and are looking to resolve this problem.

Besides, if your PC is also throwing up Windows Driver Foundation missing errors, then this will also eat up a lot of resources and ultimately drain your device’s battery.

If you are also experiencing the Windows Driver Foundation issue, and looking for solutions for it, then you are in the right place.

In this guide, we will share a list of solutions that have helped users resolve the problem. Let us check it out.

What is Windows Driver Foundation and the reasons for this issue?

Before we jump into the solutions to fix the Windows Driver Foundation missing problem, it is better to understand what it is and the reasons that are triggering the issue.

The Windows Driver Foundation is the former name of the Windows Driver Framework. When some important files go missing, you will see an error message Driver WUDFRd Failed to Load/Missing.

When this error message pops up, it indicates that there are some device drivers on your Windows PC that failed to load properly.

For most users experiencing this issue, the problem occurs after they have updated from an older version of Windows to a new version, let’s say Windows 10 to Windows 11.

Here are a few key reasons you might come across the Windows Driver Foundation missing error.

The device drivers aren’t compatible with the version of your Windows.

A third-party app is conflicting with device drivers.

Presence of corrupt temporary files.

Due to corrupt system files.

Windows Driver Foundation service isn’t running.

Wi-Fi drivers are not updated.

Your copy of Windows is missing some important system files.

Latest Windows updates aren’t installed.

The above are some of the most common reasons why you might experience the Windows Driver Foundation missing error.

Now that you have some knowledge about what the problem is and the reasons that are possibly triggering it, let us check out how you can resolve it.

How can I fix the Windows Driver Foundation missing error?

1. Restart your PC

A simple restart can do wonders and is one of the most common solutions you will hear people suggest when you come across any device-related issues.

A restart lets your PC reload all the important system files from scratch, including any that failed to load during the previous session.

Before trying anything extreme, we would suggest you restart your PC and see if this fixes the Windows Driver Foundation missing issue.

2. Update Windows

Press the Win + I keys to open Settings.

Select Windows Update from the left pane.

Hit the Check for updates button.

Your PC will now check for a new update. If one is available, it will prompt you to install it.

Microsoft pushes new updates that not only bring some new features but also include several bug fixes for the Windows OS.

It is highly recommended that you keep your Windows PC up-to-date so that you do not miss out on new features, the latest security patches, and bug fixes.

3. Run Windows Driver Foundation service


Head over to the Windows Services menu and enable the Windows Driver Foundation service so it can function properly and possibly cure the problem.

4. Update your device drivers

Not only is it essential to have the latest version of Windows installed; you should also keep all the device drivers installed on your PC up to date.

Outdated drivers may be causing issues because they might not be compatible with the version of Windows OS installed.

While you can manually update the drivers for the devices on your PC, there is another easy way to do this.

Outbyte Driver Updater scans your PC for outdated drivers, shows you the result, and then updates all the drivers. Other features include fixing faulty or broken driver files, updating old device drivers, locating missing drivers, etc.

⇒ Get Outbyte Driver Updater

5. Disable hard drive hibernation

6. Run SFC command

Search for Command Prompt in the Start menu.

Run it as an administrator.

Type in the below command and press Enter: sfc /scannow

The System File Checker, or in short, the SFC scan command, checks your PC for all corrupt files, and if it finds missing or corrupt system files, it automatically repairs them.

Simply reboot your PC once the process is complete and see if this fixes the Windows Driver Foundation missing issue or not.

Alternatively, you can also use a trusted third-party software called Restoro. Using this tool, you can easily resolve the issues triggered because of corrupt system files.

If system-related files get corrupt, your system may not function properly. For such scenarios, you can either go ahead through the process of reinstalling the operating system or repairing it, or else opt for Restoro and see if it resolves your problem or not.

7. Run System Maintenance troubleshooter

Whenever you come across any device-related or driver-related issue, we would suggest you run the Windows Troubleshooter.

This built-in tool comes with all the troubleshooting capabilities and can help you cure several driver-related problems, including the Windows Driver Foundation missing issue.

Let the troubleshooter run, and it will give you a report about what is causing the issue, and it will also prompt you to do the needful to fix the problem.

8. Perform clean boot

Performing a clean boot starts your Windows PC with the minimal set of drivers that are required for the system to boot up.

Using this clean environment, you can easily determine if a background activity is interfering or causing conflicts on your PC with your game or program.

9. Clear the temporary folder

10. Reset Windows

Your PC will undergo the reset process after you have executed the above steps. If you choose the option to keep your files, only the system files will be reset, and your personal files will remain in place once the process is complete.

Moreover, resetting your Windows 11 PC should be the last option if none of the above-mentioned solutions helped you solve the Windows Driver Foundation missing error.

Using the above solutions you should be able to fix the problem. We also have a dedicated guide on how you can fix the Windows Driver Frameworks uses too much CPU issue.

Since Windows Driver Foundation is the former name of the Windows Driver Frameworks, you can also use the mentioned solutions in that post to possibly fix the problem.

