Case 1: INDUSTRY
SYSTEM FOR DIRECT MONITORING AND INVESTIGATION OF PRODUCTION
- The customer operates a rolling mill and struggles with an unacceptably high percentage of defective production.
- The most costly defect occurs when the shape of the product becomes incorrect, leading to process stops, lost production and other knock-on effects. The process is complex, with many steps and tight tolerances.
- A critical part of the production occurred halfway through the process, and the process engineers were keen to review production in detail at this point, both historically and predictively.
- A system was built with Python as a base where different predictive models analyzed data generated in the process, including material data, sensor data and the shape of the products.
- Everything was visualized in an interface, where one could follow the path of the product through the whole process.
- Among other things, predictions were made from sensor data and sequences of measurement values, where a Bidirectional LSTM network with an attention mechanism was used to identify which part of the product’s form was critical.
- The system warns when risk of error is detected and points to the actual risk factor for the individual product, even before it becomes incorrect.
- The system was intended for process engineers who examine how production is going and what caused a given error. In addition, it provides critical information to operators, who can then make better-informed decisions.
- Both the predictive aspect (anticipating production errors) and the historical aspect (how production failures develop over time) are very valuable to the intended users.
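The attention step described above can be sketched in isolation. The snippet below is a minimal, hypothetical numpy illustration (not the production model) of how attention weights over the hidden states of a bidirectional LSTM single out the timestep, i.e. the part of the product's form, that drives a prediction; in the real network the weight vector `w` would be learned jointly with the LSTM.

```python
import numpy as np

def attention_pool(hidden_states, w, eps=1e-12):
    """Additive attention over BiLSTM hidden states.
    hidden_states: (timesteps, features) output of a bidirectional LSTM.
    w: (features,) attention weight vector (learned in a real model).
    Returns the pooled context vector and the per-timestep weights."""
    scores = np.tanh(hidden_states) @ w            # one score per timestep
    scores = scores - scores.max()                 # numerical stability
    weights = np.exp(scores) / (np.exp(scores).sum() + eps)  # softmax
    pooled = weights @ hidden_states               # weighted sum of states
    return pooled, weights
```

The index `weights.argmax()` is then the timestep, i.e. the position along the product profile, that the model attends to most, which is what lets the system point at the critical part of the form.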
Case 2: INDUSTRY
ANALYSIS OF SENSOR DATA FROM MANUFACTURING TO REDUCE ERROR RATES AND FACILITATE ROOT CAUSE ANALYSIS
- The customer operates a rolling mill and struggles with an unacceptably high percentage of defective production.
- The most costly defect occurs when the shape of the product becomes incorrect, leading to process stops, lost production and other knock-on effects.
- The process is complex, with many steps and tight tolerances. The customer had previously carried out high-level analyses but failed to identify what causes the errors.
- The focus was to find what causes the errors, as well as the relationships and patterns they had not been able to identify before.
- To answer why some products are faulty, all sensor data from the production was analyzed, covering a total of about 3,500 different parameters.
- The work was conducted in Python with statistical analyses, machine learning and deep learning models. The insights were delivered to the customer in the form of a report containing suggestions on how to reduce the error rate.
- Technically, XGBoost models were used on most of the parameters, and a Bidirectional LSTM network with an attention mechanism was used to analyze sequences of measurement values.
- The trained models achieved an AUC-ROC of 0.84 and an accuracy of 88%.
- Among other things, it was discovered that certain characteristic forms of the product increased the risk of errors in the final output. Furthermore, a certain part of the process could be identified as critical. The information served as a basis for process engineers and operators responsible for the production facility.
- This provided guidance on where new sensors should be placed and which parts of the process should be monitored more closely.
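The gradient-boosting part of this workflow can be sketched as below. This is a hedged illustration, not the delivered solution: scikit-learn's GradientBoostingClassifier stands in for XGBoost, a small synthetic dataset stands in for the ~3,500 real process parameters, and the metrics are computed the same way as the reported AUC-ROC and accuracy. The feature-importance ranking at the end mirrors how such a model points at candidate root-cause parameters.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score, accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in: 50 features instead of the real ~3,500 parameters,
# with defective products as the rare positive class.
X, y = make_classification(n_samples=2000, n_features=50, n_informative=10,
                           weights=[0.9, 0.1], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=42)

# Gradient boosting used here as a stand-in for XGBoost.
model = GradientBoostingClassifier(random_state=42).fit(X_tr, y_tr)
proba = model.predict_proba(X_te)[:, 1]

auc = roc_auc_score(y_te, proba)                    # reported as 0.84 in the project
acc = accuracy_score(y_te, model.predict(X_te))     # reported as 88% in the project

# Ranking feature importances suggests which parameters to investigate
# as root-cause candidates (and where new sensors might pay off).
top = np.argsort(model.feature_importances_)[::-1][:5]
```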
Case 3: FUEL SUPPLIER
CHURN ANALYSIS AND CUSTOMER SEGMENTATION
- The client had about 60,000 corporate customers and was finding it difficult to decide on which customers to focus its marketing activities and offers.
- The client also wanted to know which customers were most likely to leave (churn) in the next six months.
- Before the project got under way, most of the selections were performed manually, which gave poor accuracy, unnecessary marketing activities, and high customer turnover.
- The assignment began with segmenting the customers and examining Customer Lifetime Value, so as to obtain a better picture of the customer base and its behavioural patterns.
- Furthermore, a number of models were trained using SPSS Modeler and R to predict which customers were likely to churn.
- An ensemble of the trained models generated a monthly list of the customers on which the marketing department should focus.
- Segmentation helped the client understand its own customers better. Several new insights were generated and old assumptions were shown to be incorrect.
- For instance, it was possible to show differences in customer behaviour depending on whether the purchases were of petrol or diesel, and previous assumptions about customers’ travel patterns could be revised. This formed the basis for their ongoing work with customer analysis.
- The Churn models achieved a precision rate of 89%, which together with lifetime value helped pinpoint which customers were relevant for directed activities and offers.
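The project itself used SPSS Modeler and R; the Python sketch below is only an illustration of the same workflow under assumed, synthetic data: train a churn classifier, measure its precision, and rank customers by predicted churn probability to produce a monthly focus list like the one delivered to the marketing department.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the ~60,000 corporate customers; label 1 = churned.
X, y = make_classification(n_samples=3000, n_features=12, n_informative=6,
                           weights=[0.85, 0.15], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Precision = share of customers flagged as churners who actually churn
# (the project's models reached 89% on this metric).
precision = precision_score(y_te, model.predict(X_te))

# Monthly focus list: the 100 customers with the highest predicted churn risk,
# to be cross-referenced with lifetime value before targeting offers.
churn_risk = model.predict_proba(X_te)[:, 1]
focus_list = np.argsort(churn_risk)[::-1][:100]
```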
Case 4: AMUSEMENT PARK
CUSTOMER ANALYTICS EXPERIENCE PLATFORM
The amusement park is undertaking an extensive strategic shift from an analog to a digital amusement park. Today its data sits in silos and is not easily accessible, leaving the park effectively blindfolded and unable to identify individual customers or even segments. This makes it impossible to execute the strategic move towards digitalization and improve the guests’ customer experience.
The Advectas Customer Analytics Experience Platform is powered by Azure. All data, including IoT data, is stored in Azure Data Lake to ensure accessibility, and advanced machine learning algorithms are then applied to identify customer journeys, behaviors, patterns and offerings, with the aim of improving the customer experience.
- Power BI gives them an insightful way of visualizing the tempo of the park. Dashboards, reports and alerts help the amusement park move from insight to action in order to increase customer experience and satisfaction.
- Neural gas algorithms and specialized deep learning applied to customer and real-time data let the amusement park automatically segment customers and give them recommendations based on behaviors, positioning and attitudes.
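Neural gas is less widely known than k-means, so a minimal sketch may help. The code below is a simplified, illustrative version of the classic Martinetz–Schulten neural gas (all parameter values are assumptions, and the production system would combine this with the deep-learning components mentioned above): for each sample, every codebook vector is ranked by distance and pulled toward the sample with a strength that decays exponentially with rank and over time. The converged codebook vectors define the customer segments.

```python
import numpy as np

def neural_gas(X, n_units=4, n_iter=2000, eps=(0.5, 0.05), lam=(10.0, 0.1), seed=0):
    """Simplified neural-gas clustering.
    X: (n_samples, n_features). Returns codebook vectors (n_units, n_features)."""
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), size=n_units, replace=False)].astype(float)
    for t in range(n_iter):
        frac = t / n_iter
        eps_t = eps[0] * (eps[1] / eps[0]) ** frac    # learning-rate schedule
        lam_t = lam[0] * (lam[1] / lam[0]) ** frac    # neighbourhood-range schedule
        x = X[rng.integers(len(X))]
        # Rank every codebook vector by its distance to the sample ...
        ranks = np.argsort(np.argsort(np.linalg.norm(W - x, axis=1)))
        # ... and move each one toward the sample, closest ones the most.
        W += (eps_t * np.exp(-ranks / lam_t))[:, None] * (x - W)
    return W

def assign(X, W):
    """Segment label = index of the nearest codebook vector."""
    return np.argmin(np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2), axis=1)
```

Unlike k-means, every codebook vector is updated on every sample (soft, rank-based competition), which makes neural gas less sensitive to initialization.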
Case 5: CONSULTANCY FIRM
SYSTEM FOR MATCHING THE RIGHT CONSULTANTS TO THE RIGHT ASSIGNMENTS
- The customer is a major consultancy firm with about 5,000 consultants and suppliers, and 25,000 project positions. The mission was to find the right consultants for the right assignments, and to determine how many had a given type of competence. This was something that took too long with previous systems.
- In addition, it was difficult to ensure proper competence visibility with previous systems. Many projects were staffed without checking who actually had the best competence, and some projects were even declined since nobody knew who in the company had the necessary expertise to ensure delivery.
- Together with the customer a search and matching solution was created in Python, where project owners could quickly find consultants who matched given search criteria, including people who had written their CV in a language other than Swedish.
- Using advanced text analysis, doc2vec and RNNs as well as other technologies, it was also possible to create an assignment description and find relevant consultants for the assignment.
- The entire solution could be used with a Google-like interface in a portal for assignment matching.
- The data science part of the project played a central role in cutting the time needed to find the right people.
- The language-agnostic part was immensely valuable and helped the project owners find the right people for the right assignments.
- For the consultant managers, this solution was also a major benefit since it helped visualise the competences available in the network.
- The solution was regarded as one of the consultancy firm’s most important differentiators compared with the competition.
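The project used doc2vec and RNN-based text analysis, which require a large CV corpus to train; the sketch below therefore substitutes a plain TF-IDF vectorizer to illustrate only the matching mechanics, on a toy set of hypothetical CVs. The idea is the same: embed an assignment description in the same vector space as the CVs and rank consultants by cosine similarity.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical mini-corpus; the real system indexed ~5,000 CVs with
# doc2vec/RNN embeddings rather than TF-IDF.
cvs = {
    "anna": "python machine learning data engineering spark",
    "bjorn": "java backend microservices kubernetes",
    "carin": "power bi data visualization dashboards reporting",
}

def match_consultants(assignment_text, cvs, top_n=2):
    """Rank consultants by cosine similarity between their CV vector
    and the assignment-description vector."""
    names = list(cvs)
    vec = TfidfVectorizer()
    M = vec.fit_transform(list(cvs.values()) + [assignment_text])
    sims = cosine_similarity(M[-1], M[:-1]).ravel()   # query vs each CV
    return sorted(zip(names, sims), key=lambda p: -p[1])[:top_n]
```

With doc2vec-style embeddings in place of TF-IDF, the same ranking also works across languages, which is what made the language-agnostic search possible.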
Case 6: FUEL SUPPLIER
SEGMENTATION AND CUSTOMER VALUE ANALYSIS
- The client had about 60,000 corporate customers and found it difficult to decide on which customers to focus its marketing activities and offers.
- The existing customer analyses were very basic and predetermined. Prior to this project, much of the work was done manually, which resulted in poor precision, unnecessary marketing activities and high customer turnover.
- Another costly aspect was the way the discount policy was structured for different customers, with discount levels set more or less arbitrarily.
- Together with the customer and a management/strategy consultant with B2B focus, a plan was established for how best to implement a fresh approach to segmentation.
- The customers were divided into different RFM (Recency, Frequency, Monetary Value) levels and into different groups based on CLV (Customer Lifetime Value).
- These groups were analysed from various perspectives to identify properties and patterns, and to map the customers’ travel patterns over time. This made it possible to confirm some previous assumptions and refute others.
- The new plan and segmentation offered new potential for working more efficiently with the customer base, and for being more relevant when customers were contacted.
- Several new customer groups were identified, including ‘infrequent’ customers who came and went and whose purchases were intermittent. Moreover, it was possible to see differences between those who filled up with diesel and those who filled up with petrol or alternative fuels.
- In addition, the analysis provided a basis for working with discount levels in a more structured way, avoiding unnecessarily large discounts relative to each customer’s estimated value.
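The RFM step above can be sketched as follows. This is a minimal pandas illustration with assumed column names (`customer`, `date`, `amount`); the CLV modelling and the subsequent group analysis are not shown. Each customer is scored 1–4 per dimension by quartile:

```python
import pandas as pd

def rfm_scores(tx: pd.DataFrame, now: pd.Timestamp) -> pd.DataFrame:
    """Score each customer 1-4 on Recency, Frequency and Monetary value.
    tx needs columns: customer, date, amount."""
    agg = tx.groupby("customer").agg(
        recency=("date", lambda d: (now - d.max()).days),  # days since last purchase
        frequency=("date", "count"),                       # number of purchases
        monetary=("amount", "sum"),                        # total spend
    )
    # Quartile scores; ranking first guarantees unique bin edges.
    # Recency is reversed: a recent purchase should give a high score.
    agg["R"] = pd.qcut(agg["recency"].rank(method="first"), 4, labels=[4, 3, 2, 1]).astype(int)
    agg["F"] = pd.qcut(agg["frequency"].rank(method="first"), 4, labels=[1, 2, 3, 4]).astype(int)
    agg["M"] = pd.qcut(agg["monetary"].rank(method="first"), 4, labels=[1, 2, 3, 4]).astype(int)
    return agg
```

Combined R/F/M levels (e.g. 4-4-4 for recent, frequent, high-spend customers versus 1-1-x for lapsed intermittent buyers) give exactly the kind of groups, including the ‘infrequent’ segment, that the analysis surfaced.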
Case 7: THE NATIONAL FOOD AGENCY
STRATEGIC ROADMAP FOR BI, DATA SCIENCE AND DATA WAREHOUSE
- The Swedish National Food Agency commissioned us to examine how they could compile, process and analyse data efficiently, in order to increase internal usage and gain new knowledge.
- More specifically, how reporting, self-service BI, Data Science and AI solutions could best be integrated into their operation. Their data warehouse needed restructuring, and they had recently decided to make new investments in BI and analysis.
- The deliveries included status analyses, collation of requirements and demands, environmental prerequisites and recommendations for analysis environments. Together with the National Food Agency, a series of workshops were conducted where ideas about the operation were identified, listed, concretised and validated.
- By utilising best-practice methods from other projects, a roadmap was created based on a technical, operational, AI and reporting perspective. Together with tool choice and platform choice, this provided an overview of how the National Food Agency could develop its data-driven initiatives.
- The project results were delivered in a series of presentations to various representatives of the Agency. The roadmap will be used in discussions with the National Food Agency’s top management and the relevant government agencies, and for determining investments in forthcoming projects.
- The project also helped promote understanding of various analysis methods in general, and AI in particular.
- Moreover, it provided further impetus for methods of working with innovation and for ensuring that ideas generated within the organisation were duly noticed and taken on board.