
What is the main difference between data product managers and data analysts in Internet companies?

Data product manager and data analyst are both popular positions in Internet companies at the moment. The responsibilities of a data product manager overlap with those of a data analyst; the difference is that the data product manager focuses on turning data analysis into products.



What is the difference between a data product manager and a product manager in an Internet company?

In large Internet companies, the product manager role is subdivided in several ways. Product managers who focus on interface design care about how good the product interface looks; they use Axure to draw the various screens and buttons, need strong prototyping skills, and relate to data in that they rely on data feedback to improve the product interface. Product managers who focus on feature implementation care about whether a feature meets user expectations, whether it runs efficiently, and whether the implementation steps are short enough; they need a technical background and an understanding of how developers implement things, and they relate to data in that they use data feedback to raise the success rate of features, lower the crash rate, and speed up execution. These first two types are front-end product managers. Among back-end product managers, besides those who build internal management platforms for each department, there is the data product manager.


As the above shows, the product manager role has many subdivisions, and the data product manager has its own particular requirements and concerns.

Here is the job description for a data product manager position at one company.

Job description:


Responsible for data statistics product development for the portal and the app and for iterations of the recommendation algorithm and related work; independently own the daily iteration of the product line and be accountable for operating results in a data-driven way.

Monitor the core KPI data of all products and deliver valuable data to the operations team in a timely manner.

Responsible for data management and operations after the product goes live: continuously monitor and analyze the relevant data, regularly analyze and evaluate the product itself, the overall industry, and competitors, continuously optimize the product, and manage the full product life cycle.

Report the project's core data indicators and project progress, and be accountable for those indicators across the product life cycle.

Responsible for the continuous operation of the product: ongoing optimization, improvement, iteration, and deep exploration of user needs.


From this description we can see that the data product manager position has three concerns: first, the data statistics back end; second, the recommendation system; third, the monitoring and analysis of product data (a small illustrative sketch of such monitoring follows below). The requirements for the position are therefore sensitivity to data and an understanding of certain data mining algorithms, so a degree in mathematics or statistics helps.
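To make "monitoring and analysis of product data" a little more concrete, here is a minimal illustrative sketch in Python using pandas. The event log, the column names user_id, event and ts, and the metrics chosen are assumptions for illustration only and do not come from the job description above.

import pandas as pd

# Hypothetical raw event log: one row per user action.
events = pd.DataFrame({
    "user_id": [1, 1, 2, 3, 3, 3],
    "event":   ["view", "purchase", "view", "view", "view", "purchase"],
    "ts": pd.to_datetime([
        "2024-01-01 09:00", "2024-01-01 09:05", "2024-01-01 10:00",
        "2024-01-02 08:00", "2024-01-02 08:10", "2024-01-02 08:20",
    ]),
})

events["date"] = events["ts"].dt.date

# Daily active users: distinct users with any event that day.
dau = events.groupby("date")["user_id"].nunique()

# Daily purchase conversion: purchasing users divided by active users.
buyers = events[events["event"] == "purchase"].groupby("date")["user_id"].nunique()
conversion = (buyers / dau).fillna(0)

print(dau)
print(conversion)

In practice such metrics would be computed on a data warehouse and pushed to a dashboard; the point here is only the shape of the calculation a data product manager is expected to understand.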

Finally, let's talk about data mining engineers. Among data-related positions, I think data mining and data architecture have the highest barriers to entry and best reflect the value of data. Most companies recruit data mining engineers with a master's degree or above in mathematics, statistics, or computer science. Why is a bachelor's degree not enough?


Most companies believe that four years of undergraduate study are not enough to understand the derivation and application scenarios of data mining algorithms. To do data mining well, besides a solid foundation in mathematics and statistics, the ability to implement algorithms in code is also an important point of assessment. Why does data mining have such a high threshold, and does it really deliver that much value to a business? Its application scenarios make this clear. A certain music company, call it A, had been established for many years and was known for its small, clean interface and excellent user experience. Unfortunately, for many years it did not pay enough attention to music copyright and kept losing users who could not download their favorite songs. The company learned its lesson and decided to take a different path: it hired a team of data mining engineers at great expense and built the best recommendation system in the music industry. It quickly won back a large number of users, and its user share now ranks among the top three in the industry.


Yes, the recommendation system can be called the most important application scenario for data mining. It originated on e-commerce websites with features like "users who viewed this item also viewed..." and "users who bought this item also bought...", and has since developed into extracting complex features and computing correlations across many dimensions. Many well-known data mining algorithms, such as Naive Bayes, neural networks, and logistic regression, require a solid statistical foundation and relevant project experience before they can be applied maturely in business practice. Data mining is a profession that has emerged with the development of big data technology. In the past, because of technical limitations, training data often had to be sampled, so the final prediction accuracy achieved by an algorithm was only around 60%; the maturity of big data software now allows engineers to model nearly the full data set, pushing the final prediction accuracy to 80% or even 90%, which better reflects the value of data mining.
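As a rough illustration of the "users who bought this item also bought" idea mentioned above, here is a minimal sketch of item co-occurrence counting in Python; the baskets and the also_bought helper are made-up examples and are not taken from any particular company's recommendation system.

from collections import defaultdict
from itertools import combinations

# Hypothetical purchase history: each inner list is one user's basket.
baskets = [
    ["milk", "bread", "eggs"],
    ["milk", "bread"],
    ["bread", "butter"],
    ["milk", "eggs"],
]

# Count how often each pair of items is bought by the same user.
co_counts = defaultdict(int)
for basket in baskets:
    for a, b in combinations(sorted(set(basket)), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def also_bought(item, top_n=3):
    """Return the items most often bought together with `item`."""
    related = [(other, n) for (i, other), n in co_counts.items() if i == item]
    return sorted(related, key=lambda pair: -pair[1])[:top_n]

print(also_bought("milk"))  # e.g. [('bread', 2), ('eggs', 2)]

Real recommendation systems normalize these counts (for example with cosine similarity or lift) and combine them with models such as logistic regression, but the co-occurrence counting above is the core intuition behind "also bought".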

The future development of artificial intelligence, big data, cloud computing, and the Internet of Things deserves attention; they are all cutting-edge industries. Multi-intelligence era focuses on introducing and popularizing the science of artificial intelligence and big data.

