Plotly Analytics


Business Consulting and Services

Pune, Maharashtra 15 followers

Education and consulting services in Data Analytics using R Programming language

About us

Businesses use data as one of their biggest assets to remain competitive, which has led to a demand for data science talent. Supply, meanwhile, remains too scarce to meet that demand, opening data science and machine learning up to new roles. Thus, data scientists come from varied backgrounds such as engineering, business administration, economics, and the social and medical sciences, to name a few.

Plotly Analytics was started with a mission to help professionals like you who do not come from a tech/programming background. We make data science easier to learn by focusing on the most relevant topics and providing in-depth learning materials that you can understand regardless of your professional or educational background. Our learning resources are available as blog posts (on the company website), YouTube videos, and e-books. Nearly everything we teach is 100% free, with the exception of some resources. We use the R programming language in all our learning resources and consultancy services.

We also help SMEs leverage their data by providing corporate training and consultancy services. Some of the services we offer include:
- Exploratory data analysis
- Predictive analysis (forecasting)
- Corporate training
- Data visualization and report generation using state-of-the-art scientific publication software such as LaTeX, R Markdown, and R Shiny

Industry
Business Consulting and Services
Company size
2-10 employees
Headquarters
Pune, Maharashtra
Type
Self-Employed
Founded
2024
Specialties
R Programming, Exploratory Data Analysis, Predictive Analytics, Data Visualisation, and Project Management


Updates


Data manipulation is the process of changing or transforming data to make it more organized, readable, and useful. It is used to extract insights, perform calculations, and prepare data for analysis and presentation. In this fourth article on why you should use R, we compare the capabilities of R and Excel on various data manipulation tasks.

If you wish to read the previous articles, here are the links:
▶ R vs Excel - What is the difference: https://lnkd.in/dcxTr5mS
▶ Why should I use R: The Excel-R data exploration comparison: https://lnkd.in/dtjBbGK2
▶ Why should I use R: The Excel-R data visualization comparison: https://lnkd.in/dYWuCzbX

📗 Rename the columns in R and Excel
Excel: ✔ Renaming a column in Excel is a completely manual process: double-click the column heading you want to rename, type the new column name, and press Enter.
R: ✔ In R, renaming a column is easy. No pointing and clicking is involved, and the rename function (from the dplyr package), thanks to its intuitive syntax, clearly shows which columns are renamed and what their new names are.

📗 Arrange a column in ascending order
Excel: ✔ In Excel, arranging a column in ascending order is a completely manual process. Because of Excel's point-and-click nature, it is impossible to tell, just by looking at a column, how the data was modified.
R: ✔ In R, we use the arrange() function to sort the data in ascending order. It clearly shows which column is modified and what the modification is.

📗 Selecting and adding a new column
Excel: ✔ If we want to create a separate data table from a handful of columns of the main data set, we have to copy and paste those columns into a new tab. Creating a new column requires writing a formula that is hidden inside a cell and is therefore difficult to spot-check.
R: ✔ In R, we use the select() function to pick the columns we need for the analysis, and then the mutate() function to create a new column. This code is completely reproducible: after reading the script, a user knows exactly which columns were selected for the analysis and how the new column was added to the data.

📗 Remove a column
Excel: ✔ In Excel, inserting or deleting a column is a manual process.
R: ✔ In R, we use the select function to drop the column we want to remove, making it easy to verify that the right column has been removed from the data.

You can read the full article here: https://lnkd.in/dh5Z_MA2
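The dplyr verbs described above can be sketched in a few lines. This is a minimal, hypothetical example using R's built-in mtcars data set, not the data from the article:

```r
library(dplyr)

cars <- mtcars %>%
  rename(weight = wt) %>%                 # rename wt to weight: new = old
  arrange(mpg) %>%                        # sort rows by mpg, ascending
  select(mpg, weight, hp) %>%             # keep only the columns we need
  mutate(power_to_weight = hp / weight)   # add a derived column

head(cars)
```

Every step is visible in the script, which is exactly the reproducibility advantage over point-and-click edits.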


This is the third article in an ongoing series on why you should use R.
▶ You can read the first article here: https://lnkd.in/dcxTr5mS
▶ You can read the second article here: https://lnkd.in/dtjBbGK2

In this article, we compare the capabilities of R and Excel on various data visualisation tasks. The reasons below may encourage you to make the switch from Excel to R.

✅ Reproducibility
✔ Can you view the code used to generate an Excel graph? If you create a scatter plot, can you tell exactly what is going on? Can you tell which data ranges are used on the X and Y axes without pointing and clicking on the scatter plot?
✔ With R, all of this is possible. All the code is automatically visible in the form of scripts. Reading and understanding the code is straightforward thanks to its easy-to-read syntax, which lets you track what the code is doing without worrying about hidden functions or modifications happening in the background.

✅ Tracking changes
✔ In Excel, it is very difficult to track which modifications have been made to a graph. Has the data range changed? Have new data points been added to the plot? Has the axis range changed?
✔ In R, because the code is fully visible, you can track all the changes and see which part of a chart was modified.

✅ Automation
✔ In Excel, a user typically draws a graph in a single document, and if the same graph is required for a different data set, it is common to copy and paste the entire sequence of steps into a separate document.
✔ With R, we can avoid this frustration by writing functions that create the same type of graph for different data sets.

✅ Flexibility
✔ In Excel, you are limited to the charts available in your version of Excel.
✔ In R, you can build your own custom visualizations using various packages, giving R users far more flexibility to create, modify, and publish customised visualizations.

You can read the full article here: https://lnkd.in/dYWuCzbX
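The automation point can be sketched as a small reusable plotting function. This is a hypothetical example with ggplot2 (make_scatter is an illustrative name, not from the article):

```r
library(ggplot2)

# One function, reusable across data sets: the curly-curly operator
# passes unquoted column names through to aes()
make_scatter <- function(data, x, y) {
  ggplot(data, aes(x = {{ x }}, y = {{ y }})) +
    geom_point() +
    theme_minimal()
}

p1 <- make_scatter(mtcars, wt, mpg)                  # one data set
p2 <- make_scatter(iris, Sepal.Length, Sepal.Width)  # same code, another data set
```

The same few lines of code replace the whole copy-and-paste sequence that Excel would require for the second chart.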


This is the second article in an ongoing series on why you should use R. You can read the first article here: https://lnkd.in/dwEeqzZp

In this article, we compare the capabilities of R and Excel on various data exploration tasks.

📁 Loading the data set
Excel:
✔ Go to File > Open and browse to the location that contains the file.
✔ Locate and double-click the Excel file that you want to open.
R:
✔ Importing an Excel file into R is easy: use the read_excel function from the readxl package.
# import the Excel file into R
gapminder_data <- read_excel("C:\\Blogs\\R and Excel differences\\gapminder_data.xlsx")

📁 Exploring the data set
Excel:
✔ Excel has one basic data structure: the cell. Cells are extremely flexible, as they store data of various types (numeric, logical, and character).
✔ In Excel, you view the data as a table of rows and columns.
✔ This way of viewing data can become messy and time-consuming if the table has millions of rows and thousands of columns.
R:
In R, exploring data feels like a breeze. You have several functions at your disposal for exploring data in multiple forms.
✔ head() shows the first 6 rows of the data: head(gapminder_data)
✔ tail() shows the last 6 rows of the data: tail(gapminder_data)
✔ str() gives a more elaborate view: the number of rows and columns, the data type of each variable, and some of its initial values: str(gapminder_data)

📁 Exploring the data with summary statistics
Excel: To generate summary statistics (such as the minimum and maximum values) in Excel, we follow a few steps:
👉 Select the cell next to the numbers you want to summarise.
👉 Select AutoSum on the Home tab.
👉 Click min, max, average, or any other function you want, and press Enter.
These steps are very easy to follow. However, if you return to the analysis six months later, it is often difficult to recall how you arrived at the answer, where you had clicked, or which tab you had selected.
R: In R, the approach is quite different. You write R code in plain-text documents called scripts. A script records, in linear order, the analysis steps you took to reach the answer, so every little step of the analysis is documented.
👉 Using the summary() function, we can generate summary statistics for an entire data set.

You can read the full article here: https://lnkd.in/dtjBbGK2
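The exploration functions mentioned above work on any data frame; a quick sketch using R's built-in iris data set in place of the article's gapminder data:

```r
head(iris)      # first 6 rows
tail(iris)      # last 6 rows
str(iris)       # rows, columns, type of each variable, initial values
summary(iris)   # min, max, mean, and quartiles for every column
```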


A data science roadmap establishes a process for solving data science problems. At a very high level, the roadmap has the following stages:
✔ Frame the problem
✔ Understand the data
✔ Extract features
✔ Model the data
✔ Present results
✔ Deploy code

Let's look at each stage briefly:

❓ Frame the problem: The difference between success and failure on a data science project is not about math or engineering: it is about asking the right questions. Most data science projects start with some kind of extremely open-ended question. Sometimes the questions are known in the form of pain points, but it is not clear what a solution would look like. Before delving into the actual work, it is important to clarify exactly what would constitute a solution to the problem. A "definition of done" is a good way to document the criteria for a completed project.

📗 Understand the data: Once you have access to the data, it is a good idea to ask a standard set of questions that quickly give you a feel for the data, such as:
👉 How big is the dataset?
👉 Is this the entire dataset?
👉 Is the data representative enough? Are all edge cases covered?
👉 Are there any outliers, e.g. a sudden peak in product sales due to a promotional campaign?
👉 Are there any missing values in the data? Where do these missing values come from?
The most important question to ask about the data is whether it can solve the business problem you are trying to tackle.

📱 Extract features: Features define the internal structure of a data set. In practical terms, feature extraction means taking your raw data and distilling it into a table of rows and columns. This phase may involve data scientists working closely with domain experts to understand the features and their relevance to the problem at hand.

🎰 Model: Once features are extracted, most data science projects involve some kind of machine learning model. This stage is relatively simple: you take a standard suite of models, plug your data into each one, and see which works best. Once a model is selected, a lot of time goes into tuning it to increase its accuracy.

🎁 Present results: This stage involves preparing a slide deck or a written report describing the work you did and your results. Communicating the results is often difficult, because the material is highly technical and you are presenting to a broad audience.

👨💻 Deploy code: Deployment typically falls into two categories:
👉 Batch analytics code: running analytics similar to those already done, on data that will be collected in the future.
👉 Real-time code: full-fledged development of an analytical package, written in a high-performance programming language and adhering to all the best practices of software engineering.
Once the code is deployed in production, it has to be maintained and its performance monitored.


Probably the most widely known and used of all distributions is the normal distribution. In the real world, many human characteristics, such as height, weight, and IQ score, have relative frequency curves that are closely approximated by the normal distribution. Many variables in business and industry are also normally distributed: for example, annual returns from a stock, the cost per square foot of renting warehouse space, and the quantities produced or filled by machines. The normal distribution is an integral part of statistical process control. When large enough samples are taken (approximately more than 30), many statistics are normally distributed regardless of the shape of the underlying distribution from which they are drawn.

Characteristics of the normal distribution:
✔ It is a continuous distribution
✔ It is symmetrical about its mean
✔ The area under the curve is 1
✔ It is asymptotic to the horizontal axis
✔ It is unimodal
✔ It is a family of curves

The normal distribution is described by two parameters: the mean (mu) and the standard deviation (sigma). Each pair of values of mu and sigma produces a different normal curve.

You can learn more about the normal distribution here: https://lnkd.in/dpngTC-B

In this article, you will:
✔ Get an intuitive understanding of the normal distribution using a real-life business example (calculating the probability of the load time of a company's sales page)
✔ Get reproducible R code to calculate and visualize the normal distribution
✔ Learn the definition of the standard normal distribution (z-score) and its calculation in R
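The page load time example can be sketched with base R's pnorm. The mean and standard deviation below are illustrative values, not the figures from the article:

```r
# Probability that the sales page loads within 3 seconds, assuming load
# time is normally distributed with mean 2.5 s and sd 0.5 s
p <- pnorm(3, mean = 2.5, sd = 0.5)

# The same probability via the z-score (standard normal distribution)
z <- (3 - 2.5) / 0.5   # z = 1
p_z <- pnorm(z)        # identical to p

round(p, 4)   # 0.8413
```

The two calls give the same answer because standardising with the z-score is exactly what pnorm does internally with the mean and sd arguments.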


This is the second article of a two-part series on handling date and time data in R. You can read the first article here: https://lnkd.in/dMuRxRss

When date and time data are imported into R, they often default to character strings, which we then have to convert to dates. We may also have multiple strings that we want to merge to create a date variable. Time series data comes in various formats, so we first need to transform it into a unified structure before carrying out further analysis. In this article, we focus on the various ways you can manipulate time series data. We will learn:
✔ Converting strings to dates using various formatting options
✔ Extracting date and time components from a date-time object
✔ Performing arithmetic operations on date-time objects

You can read the article here: https://lnkd.in/dpSxBQXJ
The R code provided in the article is reproducible, allowing you to copy it, practice, and incorporate it into your projects.
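The three tasks listed above, sketched in base R with illustrative dates (not taken from the article):

```r
# Convert strings to dates, with and without an explicit format
d  <- as.Date("2024-03-15")
d2 <- as.Date("15/03/2024", format = "%d/%m/%Y")

# Extract components from the date
format(d, "%Y")   # year as a string: "2024"
months(d)         # month name
weekdays(d)       # day of the week

# Date arithmetic
d + 30                                              # 30 days later
difftime(as.Date("2024-04-01"), d, units = "days")  # difference of 17 days
```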


A time series is a list of observations ordered successively over time. In a time series, observations are often recorded at evenly spaced intervals, such as daily, weekly, or monthly.
👉 This is the first article of a two-part series on handling date and time data in R. In this article, we look at how date and time objects are stored in R.
👉 In the second part, we will discuss the various functions used for extracting and manipulating date and time objects.
➡ In this article, we will learn to handle dates and times in R.
➡ We will look at how R handles date/time objects internally and learn about the different classes, such as POSIXct and POSIXlt, designed to handle date-time objects effectively.

You can read the article here: https://lnkd.in/dMuRxRss
The R code provided in the article is reproducible, allowing you to copy it, practice, and incorporate it into your projects.
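A minimal sketch of the two classes, assuming an illustrative UTC timestamp:

```r
# POSIXct stores a date-time as seconds since 1970-01-01 UTC
# (compact, the usual choice inside data frames)
t_ct <- as.POSIXct("2024-03-15 10:30:00", tz = "UTC")
unclass(t_ct)    # a single number of seconds

# POSIXlt stores the same instant as a list of components
t_lt <- as.POSIXlt(t_ct)
t_lt$hour           # 10
t_lt$min            # 30
t_lt$year + 1900    # years are counted from 1900, so add 1900
```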


String manipulation refers to the process of modifying, analyzing, or transforming strings into a form useful for analysis. R has a powerful repertoire of functions for string manipulation, and these functions are central to pattern matching. Pattern matching involves checking for a sequence of characters (the pattern) within another sequence of characters (the text). It is a crucial concept in data science, used in applications such as text processing, data mining, and information retrieval.

In this article, we look at some of the most frequently used functions for string manipulation:
✅ tolower and toupper, to change character case
✅ nchar, for counting characters
✅ Functions to split text
✅ Functions used in pattern matching

String manipulation is a fundamental aspect of programming and is used in many applications, such as data processing, text analysis, and user input handling.

You can read the entire article here: https://lnkd.in/gjBerdGV
The R code provided in the article is reproducible, allowing you to copy it, practice, and incorporate it into your projects.
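The functions listed above in a short base R sketch (the example string is illustrative):

```r
s <- "Pattern Matching in R"

tolower(s)             # "pattern matching in r"
toupper(s)             # "PATTERN MATCHING IN R"
nchar(s)               # 21 characters
strsplit(s, " ")[[1]]  # split into words
grepl("Match", s)      # TRUE: the pattern occurs in the text
sub("R", "base R", s)  # replace the first match of the pattern
```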


Character vectors in R are used to store text data, and R provides a variety of built-in functions for working with them. Perhaps the most basic thing we can do with text is to view it, and R provides several ways to view text in the console. In this article, we look at some handy functions used to concatenate (join) and print text output. We will learn:
✔ The basic print function
✔ Concatenating strings with the cat function
✔ Concatenating strings with the paste function
✔ The most flexible option, sprintf, which is derived from the C language

print: 👉 explicitly prints an object to the console.
cat: 👉 combines multiple strings and prints them.
paste: 👉 also combines multiple strings. Although both functions concatenate strings, the difference is that cat() only prints the string to the console, while paste() returns the string for further use.
sprintf: 👉 gives precise control over the output format.

You can read the article here: https://lnkd.in/dgWZh53y
All the R code examples are reproducible, allowing you to use them in your projects or practice them to enhance your skills.
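The four functions side by side, using illustrative values:

```r
name <- "Plotly Analytics"
n <- 15

print(name)                                  # prints the object, with quotes
cat("Followers of", name, "\n")              # prints to console, returns NULL
msg <- paste(name, "has", n, "followers")    # returns the combined string
sprintf("%s has %02d followers", name, n)    # precise formatting control
```

Note the cat/paste difference in action: the result of cat cannot be stored and reused, while msg holds the string paste built.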


In data analysis, a heatmap is a powerful data visualization tool for understanding the correlations between variables. Darker colors usually indicate higher correlation values, while lighter colors indicate lower or no correlation. Heatmaps can be a powerful addition to your data analysis toolkit. Here are some ways you can use them:
✅ Identifying patterns: Heatmaps make it easy to spot patterns and trends in your data. For example, you can quickly see which variables are highly correlated.
✅ Highlighting outliers: Outliers or anomalies in your data stand out clearly in a heatmap, allowing you to investigate further.
✅ Simplifying complex data: With large datasets, heatmaps provide a visual summary that makes the data easier to understand and interpret.
✅ Enhancing presentations: Heatmaps are visually appealing and can make your data presentations more engaging and easier for your audience to follow.

In this article you will learn to create heatmaps. In particular, you will learn:
✅ What a correlation matrix is
✅ What heatmaps are
✅ How to create heatmaps in R
✅ Different representations of heatmaps

You can read the article here: https://lnkd.in/d3rfbmGQ
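A correlation heatmap can be sketched with base R alone; a minimal example on the built-in mtcars data (the article may use different packages):

```r
# Correlation matrix of the mtcars variables
corr <- cor(mtcars)

# Base R heatmap: the color scale maps correlation strength to shade
heatmap(corr, symm = TRUE, main = "mtcars correlation heatmap")
```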

