

Mercari is Japan’s biggest community-powered shopping app, with offices in Japan and the United States. Accordingly, we can expect an e-commerce dataset based on Japanese and US products and customer data.
The files consist of a list of product listings and are tab-delimited (.tsv files).
train_id or test_id – ID of the listing
name – Title of the listing. Note that we have cleaned the data to remove text that looks like prices (e.g. $20) to avoid leakage. These removed prices are represented as [rm]
item_condition_id – Condition of the items provided by the seller
category_name – Category of the listing
brand_name – Name of Brand
price – The price the item was sold for, in USD. This is the target variable to predict; the column does not exist in test.tsv since that is what you will predict.
shipping – 1 if the shipping fee is paid by the seller, 0 if paid by the buyer
item_description – Full description of the item. Note that we have cleaned the data to remove text that looks like prices (e.g. $20) to avoid leakage. These removed prices are represented as [rm]
Problem Statement:
Mercari wants to offer pricing suggestions to sellers, based solely on descriptions of the products and some additional categorical attributes. Sellers offer a variety of new and used products of different brands, described by the fields listed above.
Evaluation Metric: Root Mean Squared Logarithmic Error

$$\epsilon = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\log(p_i + 1) - \log(a_i + 1)\right)^2}$$

Where:
ϵ is the RMSLE value (score),
n is the total number of observations in the (public/private) data set,
p_i is your prediction of price,
a_i is the actual sale price for i, and
log(x) is the natural logarithm of x.
Why Log?
As the figure below shows, the price distribution is skewed to the right, so to maintain a reasonable variance/spread among price values it is advisable to predict the logarithm of price. Since log(0) is undefined, 1 is added to all values before taking the logarithm.
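In code, the metric looks like the short NumPy sketch below; np.log1p handles the +1 shift:

```python
import numpy as np

def rmsle(y_true, y_pred):
    """Root Mean Squared Logarithmic Error.

    log1p(x) = log(1 + x), which handles price = 0 safely.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.sqrt(np.mean((np.log1p(y_pred) - np.log1p(y_true)) ** 2))

# Example: predictions [10, 20] against actual prices [12, 18]
print(rmsle([12, 18], [10, 20]))  # ~0.138
```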

Overview:
- Importing Data
- Exploratory Data Analysis
  - Exploring Categorical Data
  - Exploring Numerical Data
- Feature Extraction, Preprocessing and Imputation
  - Checking for Duplicate data
  - Checking for Null values
  - Feature Extraction
  - Text preprocessing
  - Imputation
- Vectorization and Training Models
  - Removing Outliers
  - Vectorizing Categorical data
  - Vectorizing Numerical data
  - Training Models
- Summary
Importing Data
Data Source : https://www.kaggle.com/c/mercari-price-suggestion-challenge/data
There is no duplicate data.
Basic information about the dataset:
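A minimal sketch of the loading step, assuming the Kaggle competition files sit in the working directory:

```python
import pandas as pd

# The competition provides tab-delimited files
train = pd.read_csv('train.tsv', sep='\t')
test = pd.read_csv('test.tsv', sep='\t')

print(train.shape)   # number of rows and columns
train.info()         # dtypes and non-null counts per column
```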

From the above information, null values are present in 3 features:
- category_name has 6,327 null values (0.4% of the data is missing)
- brand_name has 632,682 null values (42% of the data is missing)
- item_description has only 4 null values
Exploratory Data Analysis
Exploring Categorical Data
category_name:
Observation: Users are more interested in women’s apparel, beauty products, and electronics on this site.
item_condition_id:

Observation: item_condition_id 5 has the fewest records, covering only 0.1% of the data.
shipping:

Observation: The shipping fee is mostly paid by the buyer.
brand_name:
There are 4,809 brands in total; below are the top 10 brands.

Since PINK, Nike, and Victoria’s Secret are the top brands in the dataset, let’s check what kinds of products they offer.

PINK mostly offers women’s apparel.

Nike offers women’s and kids’ sports apparel, mostly shoes.

Victoria’s Secret offers women’s beauty products and apparel.
Exploring Numerical Data
Price:


Observation:
- Considering the variety of products, more than 80% of products cost less than $50.
- The logarithm of price has a better distribution and spread than the raw price.
- A logarithm-based metric such as Root Mean Squared Logarithmic Error is therefore suitable for evaluating a model.
Correlations among features



- From the above correlations, it is observed that item condition and shipping are both inversely related to price.
- This can be interpreted as: as item_condition_id increases from 1 to 5, the price tends to decrease, i.e., 1 is the best condition and 5 is the worst.
- Since these observations span a variety of products, it is advisable to compute correlations within each variety, i.e., per category name or brand name.
Correlation based on Brands

Correlation based on Categories

- From the correlations based on brand names and categories, it is again observed that item condition and shipping are both inversely related to price.
- As item_condition_id increases from 1 to 5, the price tends to decrease (1 is the best condition, 5 the worst).
- As shipping increases from 0 to 1, the price decreases, i.e., if the shipping fee is paid by the buyer (shipping = 0), the price tends to be higher.
Feature Extraction, Text Preprocessing and Imputation
Checking for Duplicate Data:



Observation:
- The dataset contains no fully duplicate records.
- When only name and item_description are considered, 0.5% of the records are duplicates.
- When name, item_description, and price are considered, 0.16% of the records are duplicates.
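These checks can be reproduced with pandas along the lines of the sketch below (column names as in the dataset):

```python
# Full-row duplicates vs. duplicates on selected columns
print(train.duplicated().sum())  # full-row duplicates: 0
print(train.duplicated(subset=['name', 'item_description']).mean())           # ~0.005
print(train.duplicated(subset=['name', 'item_description', 'price']).mean())  # ~0.0016
```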
Checking for Null Data:


Observation:
- 5% of records have no item description
- 42% of records have no brand name
- 0.4% of records have no category
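A quick way to get these counts and percentages (a sketch, not the original code):

```python
# Missing values per column, as counts and percentages
nulls = train.isnull().sum()
percent = 100 * nulls / len(train)
print(pd.concat([nulls, percent], axis=1, keys=['count', 'percent']))
```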
Feature Extraction – summary:
The name and item_description fields are concatenated into a new feature called ‘summary’.
Feature Extraction – polarity, num_sentence, num_variety_count, num_value_sum, quantity_value, date_ind:
Here the textblob and spacy libraries are used to extract these features:
- polarity = polarity score of ‘summary’ using the textblob library
- num_sentence = number of sentences in summary
- num_variety_count = number of varieties of numerical data (cardinals, ordinals, money, etc.)
- num_value_sum = sum of the values of the numerical data
- quantity_value = quantity of product in that summary
- date_ind = 1 if a date is present, else 0 (date indicator)

The polarity of item_description can indicate how good the condition of the product is, and should correlate with item_condition_id.

The polarity of an item description is found using the textblob library before preprocessing of the text, because stopwords and special symbols can express the seller’s viewpoint more strongly, giving a truer polarity.
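A sketch of how these features might be extracted; the exact spaCy entity labels and aggregation used in the original notebook are assumptions here:

```python
from textblob import TextBlob
import spacy

nlp = spacy.load('en_core_web_sm')

def polarity(text):
    # TextBlob polarity in [-1, 1]; computed on the raw summary,
    # before stopword/symbol removal, to keep the seller's tone
    return TextBlob(str(text)).sentiment.polarity

# Numeric-like entity types (assumed set of labels)
NUMERIC_LABELS = {'CARDINAL', 'ORDINAL', 'MONEY', 'QUANTITY', 'DATE'}

def numeric_features(text):
    # Sentence count plus a count of numeric-like named entities
    doc = nlp(str(text))
    num_sentence = sum(1 for _ in doc.sents)
    ents = [ent for ent in doc.ents if ent.label_ in NUMERIC_LABELS]
    return num_sentence, len(ents)
```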
Observation: Polarity values are high where item_condition_id is 1 and low where it is 5, i.e., based on the descriptions provided by users, products with item_condition_id 5 are in poor condition.
Feature Extraction : cos_sim
Cosine similarity of the title and item description, computed using the spacy library.
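A minimal sketch, assuming a spaCy model with word vectors (e.g. en_core_web_md) is used:

```python
import spacy

# spaCy document similarity is the cosine of the averaged word
# vectors, so a model that ships with vectors gives meaningful values
nlp_md = spacy.load('en_core_web_md')

def cos_sim(name, description):
    return nlp_md(str(name)).similarity(nlp_md(str(description)))
```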
Feature Engineering : num_avg
Average of the numerical values in the summary text: num_avg = num_value_sum / num_variety_count

Feature Engineering : log_num_avg
Logarithmic value of the ‘num_avg’ feature: log_num_avg = log(num_avg + 1)
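A small sketch of both features; the zero-guard for records without numeric entities is an assumption:

```python
import numpy as np

# Avoid division by zero when no numeric entities were found
train['num_avg'] = (train['num_value_sum']
                    / train['num_variety_count'].replace(0, np.nan)).fillna(0)
train['log_num_avg'] = np.log1p(train['num_avg'])
```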

Imputation – brand_name:
Since brand_name has the most null values, and brand can determine the price of a product, it is essential to find ways to fill in the missing brand names.
- One approach is to look for brand names in the ‘item_description’ and ‘name’ features. The values of name and item_description are joined and stored in a new feature named ‘summary’.
- All the brand names are stored in a set and searched for in each ‘summary’. A brand change indicator is also created to record whether the brand was provided (0) or extracted (1).
- Sometimes sellers may not put much effort into writing an appropriate brand name, so we search for brand names by looking deeper into the ‘summary’ feature: stopwords and special characters are removed from both the brand names and the summary, and the cleaned brand names are added to the set used to find brands in the summary.
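A simplified sketch of this lookup; the original also handles the deeper, preprocessed matching described below:

```python
import numpy as np

# Known brands are collected into a set, and for records with a
# missing brand the summary is scanned for one of them
brands = set(train['brand_name'].dropna().unique())

def find_brand(summary):
    for brand in brands:
        if brand in str(summary):
            return brand
    return np.nan

missing = train['brand_name'].isnull()
train.loc[missing, 'brand_name'] = train.loc[missing, 'summary'].apply(find_brand)
# brnd_change: 1 if the brand was extracted here, 0 if provided by the seller
train['brnd_change'] = (missing & train['brand_name'].notnull()).astype(int)
```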
Feature Extraction : brnd_change
It is an indicator of whether ‘brand_name’ was added during imputation.
Text Preprocessing : brand_name
Removing special characters from the ‘brand_name’ feature.
Text Preprocessing : summary
Removing special characters and stopwords from the ‘summary’ feature.
Imputation – brand_name (deep) :
Users may write brand names in the item description without punctuation or symbols, so we search for and impute brand names again after preprocessing the brand_name and summary data. The ‘brnd_change’ indicator is updated whenever a brand name is added by this imputation step.

About 15,000 more brand names were found and added to brand_name, i.e., almost 1% of the brand name data was recovered.
Text Preprocessing : summary
Lemmatization of the preprocessed ‘summary’ feature.
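A sketch of the cleaning and lemmatization steps, assuming spaCy’s stopword list and lemmatizer:

```python
import re
import spacy
from spacy.lang.en.stop_words import STOP_WORDS

nlp = spacy.load('en_core_web_sm')

def clean_text(text):
    # Keep only letters and digits, lowercase, and drop stopwords
    text = re.sub(r'[^a-zA-Z0-9 ]', ' ', str(text)).lower()
    return ' '.join(w for w in text.split() if w not in STOP_WORDS)

def lemmatize(text):
    # spaCy lemmatization of the already-cleaned summary
    return ' '.join(token.lemma_ for token in nlp(text))
```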
Imputation – category_name :
Category names are filled in based on the number of words the summary has in common with candidate category names; each category is filled in per brand_name.
For example, for the ‘adidas’ brand:
- Step 1: Select records whose brand name is ‘adidas’ and which have no category.
- Step 2: Create a list of sets of category_names used by ‘adidas’ across the whole dataset.
- Step 3: Create a set of summary words and iterate over each category set obtained in step 2, intersecting the two sets to count the common words between category name and summary.
- Step 4: The category with the highest number of common words in step 3 becomes the category name for that record.
- Step 5: If there are no common words, check whether the brand name was imputed (‘brnd_change’). If it was, remove the brand name and search for a category again among all products and categories.
- Step 6: For ‘nan’ brand_names, all categories are matched against the summary to find a suitable category.

- Checking null categories brand-wise: if a brand has records with null category_name, we collect the kinds of products (category_names) that the particular brand offers as a list of sets.
- The summary of the record is split into a set and intersected with each set of category words, i.e., we count the common words between category and summary. The category with the highest number of common words with the summary is assigned to the record; a sketch of this matching step follows below.
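A simplified sketch of this matching step for a single record (function and variable names are illustrative):

```python
# Per-brand category matching: pick the brand's known categories,
# then choose the one whose words overlap most with the summary
def impute_category(summary, brand, df):
    candidates = df.loc[df['brand_name'] == brand, 'category_name'].dropna().unique()
    summary_words = set(str(summary).split())
    best, best_overlap = None, 0
    for cat in candidates:
        overlap = len(set(cat.replace('/', ' ').split()) & summary_words)
        if overlap > best_overlap:
            best, best_overlap = cat, overlap
    return best  # None when nothing matches; later steps handle that case
```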
Feature Extraction : category_change
It is an indicator of whether ‘category_name’ was added during imputation.
Results after imputing category_name :

Feature Engineering : item_condition_id
Converting the numerals to strings, since item_condition_id has only 5 categories in numerical form.
Feature Extraction Summary
- summary = concatenated name and item_description
- polarity = polarity score of ‘summary’ using ‘textblob’ library
- num_sentence = number of sentences in summary
- num_variety_count = no. of varieties of numerical data
- num_value_sum = sum of the values of numerical data
- quantity_value = quantity of product
- date_ind = if date is present then 1 else 0 (Date indicator)
- category_change = 1 if the category was added during imputation, else 0 (genuine category indicator)
- brnd_change = 1 if the brand was added during imputation, else 0 (genuine brand indicator)
- cos_sim = cosine similarity of title and item description using the spacy library
Feature Engineering / Imputation Summary
- Imputation : brand_name
  Finding brand names in the summary text.
- Text preprocessing : summary
  Removing stopwords and special characters, and lemmatizing the preprocessed summary.
- Text preprocessing : brand_name
  Removing stopwords and special characters in brand names.
- Imputation : brand_name (deep)
  Finding brand names after preprocessing of the brand names and summary text.
- Feature Engineering : brnd_name, brand_change
  Regularization of modified brand names, removing irrelevant brand names and resetting the brand change indicator while imputing ‘category_name’.
- Feature Engineering : num_avg
  Average of the numerical values in the summary text: num_avg = num_value_sum / num_variety_count.
- Feature Engineering : log_num_avg
  Logarithmic value of the ‘num_avg’ feature.
- Feature Engineering : item_condition_id
  Converting the numerals to strings, since it has only 5 categories in numerical form.
Vectorization and Training Models
Removing Outliers
Creating Final dataframe with necessary features
Outliers fall into two cases:
- If the price is 0, the seller is giving the item away for free.
- If the price is too high, the probability of selling the item is low.


Observation:
- There are 874 records whose price is 0, i.e., the item is free.
- 5% of records are eliminated based on the 2nd standard deviation of the price data. This deals with extreme outliers such as price = 0 and very high prices.
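One plausible reading of this rule as code; whether the cut is one- or two-sided is an assumption:

```python
# Drop free items and anything beyond two standard deviations
# above the mean price
mean, std = train['price'].mean(), train['price'].std()
train_95 = train[(train['price'] > 0) & (train['price'] <= mean + 2 * std)]
print(len(train) - len(train_95), 'records removed')
```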
Train-Test Split:
75% of data is dedicated to training.
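A sketch of the split; that the target is log1p(price) at this point is an assumption consistent with the RMSLE metric:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# 75/25 split; the target is log1p(price) so models optimize on the
# same scale the RMSLE metric uses
X = train_95.drop(columns=['price'])
y = np.log1p(train_95['price'])
x_train, x_test, y_train, y_test = train_test_split(
    X, y, train_size=0.75, random_state=42)
```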
Vectorizing Categorical data
category_name:
A custom tokenizer is created that splits the text on ‘/’, for situations like the ones below.
e.g.: 1a. Handmade/Patterns/Accessories has 1 item in the dataset
1b. Handmade/Patterns/Baby has 1 item in the dataset
There could be a record with Accessories/Baby/{any other non-frequent category} in the category field; with these tokens we can still price products that are ‘Handmade/Patterns’ even when the remaining category is rare. A sketch of such a tokenizer is shown below.
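One way such a tokenizer could look with scikit-learn’s CountVectorizer (variable names are illustrative):

```python
from sklearn.feature_extraction.text import CountVectorizer

def split_categories(text):
    # 'Handmade/Patterns/Accessories' -> ['Handmade', 'Patterns', 'Accessories']
    return str(text).split('/')

cat_vectorizer = CountVectorizer(tokenizer=split_categories, lowercase=False)
cat_train_bow = cat_vectorizer.fit_transform(x_train['category_name'])
cat_test_bow = cat_vectorizer.transform(x_test['category_name'])
```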

brand_name:

item_condition_id:
Since there are only 5 categories, with single-character values between 1 and 5, it is vectorized at the character level.
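A sketch of this character-level encoding:

```python
from sklearn.feature_extraction.text import CountVectorizer

# item_condition_id was converted to a string ('1'..'5'), so a
# character-level CountVectorizer effectively one-hot encodes it
cond_vectorizer = CountVectorizer(analyzer='char')
item_cond_train_bow = cond_vectorizer.fit_transform(
    x_train['item_condition_id'].astype(str))
item_cond_test_bow = cond_vectorizer.transform(
    x_test['item_condition_id'].astype(str))
```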

summary:
Vectorizing summary text using TfidfVectorizer with min_df = 10.
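A sketch of this vectorization step:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# min_df=10 ignores terms appearing in fewer than 10 summaries
tfidf = TfidfVectorizer(min_df=10)
summ_train_bow = tfidf.fit_transform(x_train['summary'])
summ_test_bow = tfidf.transform(x_test['summary'])
```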

shipping:
Binary category with numerical values [0, 1]
x_train['shipping'].values.reshape(-1,1)
x_test['shipping'].values.reshape(-1,1)
brnd_change:
Binary category with numerical values [0, 1]
x_train['brnd_change'].values.reshape(-1,1)
x_test['brnd_change'].values.reshape(-1,1)
category_change:
Binary category with numerical values [0, 1]
x_train['category_change'].values.reshape(-1,1)
x_test['category_change'].values.reshape(-1,1)
date_ind:
Binary category with numerical values [0, 1]
x_train['date_ind'].values.reshape(-1,1)
x_test['date_ind'].values.reshape(-1,1)
Vectorizing Numerical data
All the numerical data is standardized; below are the details.
polarity_scalar : standardized ‘polarity’ feature
num_sent_scalar : standardized ‘num_sentence’ feature
num_var_scalar : standardized ‘num_variety_count’ feature
qty_scalar : standardized ‘quantity_value’ feature
log_scalar : standardized ‘log_num_avg’ feature
cos_scalar : standardized ‘cos_sim’ feature
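The same fit-on-train, transform-on-test pattern applies to each scaler; a sketch for ‘polarity’ only:

```python
from sklearn.preprocessing import StandardScaler

# Fit the scaler on the training data only, then reuse it on test
polarity_scalar = StandardScaler()
polarity_train_std = polarity_scalar.fit_transform(
    x_train['polarity'].values.reshape(-1, 1))
polarity_test_std = polarity_scalar.transform(
    x_test['polarity'].values.reshape(-1, 1))
```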
Evaluation Metric
A custom evaluation metric, Root Mean Squared Logarithmic Error (RMSLE), is created since it is not available in scikit-learn’s scoring parameter. Please refer to the link for more details regarding this metric.
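A sketch of wiring the rmsle() function from earlier into scikit-learn via make_scorer:

```python
from sklearn.metrics import make_scorer

# greater_is_better=False because a lower RMSLE is better
rmsle_scorer = make_scorer(rmsle, greater_is_better=False)
```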
Training Models
Hyperparameter Tuning of Models
Function definitions for GridSearchCV and RandomizedSearchCV were written; they take the training data, a model, and its parameters as input and return a DataFrame of cross-validation scores.
Function definition for Grid search cross-validation:

Function definition for Random search cross-validation:
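The original definitions were shown as screenshots; a sketch along those lines (the cv and n_jobs values are assumptions):

```python
import pandas as pd
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV

def grid_search(model, params, x_train, y_train):
    # Exhaustive search over the parameter grid, scored with RMSLE
    gs = GridSearchCV(model, params, scoring=rmsle_scorer, cv=3, n_jobs=-1)
    gs.fit(x_train, y_train)
    return pd.DataFrame(gs.cv_results_).sort_values('rank_test_score')

def random_search(model, params, x_train, y_train, n_iter=10):
    # Samples n_iter parameter settings instead of trying them all
    rs = RandomizedSearchCV(model, params, n_iter=n_iter,
                            scoring=rmsle_scorer, cv=3, n_jobs=-1)
    rs.fit(x_train, y_train)
    return pd.DataFrame(rs.cv_results_).sort_values('rank_test_score')
```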

The models below are used to train on the data, tuning hyperparameters to find the best score, i.e., the least RMSLE:
- Linear Regression
- Ridge Regression
- XGBoost Regressor
- XGBoost Random Forest Regressor
- RandomForest
- Wordbatch NN_ReLU_H1
Feature Set 1 : cat_train_bow, brand_train_bow, summ_train_bow, item_cond_train_std, shipp_train_std, polarity_train_std, brnd_change_train_std, cat_change_train_std, num_sent_train_std, num_val_train_std, dti_train_std
Feature Set 2 : cat_train_bow, brand_train_bow, summ_train_bow, item_cond_train_std, cos_train_std, shipp_train_std, polarity_train_std, brnd_change_train_std, cat_change_train_std, num_sent_train_std, num_var_train_std, num_val_train_std, qty_train_std, dti_train_std, log_train_std
Feature Set 3 : cat_train_bow, brand_train_bow, summ_train_bow, item_cond_train_bow, cos_train_std, x_train['shipping'].values.reshape(-1,1), polarity_train_std, x_train['brnd_change'].values.reshape(-1,1), x_train['category_change'].values.reshape(-1,1), num_sent_train_std, num_var_train_std, qty_train_std, x_train['date_ind'].values.reshape(-1,1), log_train_std
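These blocks are presumably combined with scipy.sparse.hstack; a sketch for Feature Set 1, using the variable names above:

```python
from scipy.sparse import hstack

# Sparse BoW/TF-IDF blocks and dense standardized columns are
# stacked side by side into a single training matrix
train_matrix = hstack((cat_train_bow, brand_train_bow, summ_train_bow,
                       item_cond_train_std, shipp_train_std,
                       polarity_train_std, brnd_change_train_std,
                       cat_change_train_std, num_sent_train_std,
                       num_val_train_std, dti_train_std)).tocsr()
```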
Below are the performances of each model.
Linear Regression
Training Description | Best Parameters | RMSLE
---|---|---
Feature Set 1 | – | 0.75897
Feature Set 2 | – | 0.68443
Feature Set 3 | – | 0.60311
Feature Set 3 (95% data) | – | 0.43815
Observation: The WordBatch model has the best Root Mean Squared Logarithmic Error of 0.39593.
Future work
- Add Word2Vec features; these can be obtained from the gensim and spacy libraries and the training data.
- Train the data using an attention model and check the results.
- Use the coefficient of variation to better understand the spread of the data.
- Use the coefficient of determination to examine in depth how the dependent variable depends on the independent variables.
Conclusion
Mercari provided seller data with the aim of finding a solution that suggests an appropriate price to sellers based on the product. My aim was to focus on retrieving information from the data and applying various feature engineering techniques based on that information, which helped achieve a good RMSLE score. The lesson I learnt: take more time to look into the raw data and put yourself into the problem, so that it becomes easier to make assumptions, analogies, and so on.
Sources
https://www.appliedaicourse.com/
https://towardsdatascience.com/predict-product-success-using-nlp-models-b3e87295d97
https://towardsdatascience.com/build-and-compare-3-models-nlp-sentiment-prediction-67320979de61
https://www.youtube.com/watch?v=QFR0IHbzA30
https://viblo.asia/p/predict-independent-values-with-text-data-using-linear-regression-aWj5314eZ6m