
CONTENT 

 

1.1 IMPROVEMENT OPPORTUNITY 

1.2 MEASUREMENT 

1.3 SCOPE OF PROJECT 

2.1 CURRENT PROCESS KEY PERFORMANCE INDICATORS 

2.2 KEY VARIABLES 

2.3 GOALS 

3.0 PROCESS (CREATING A DATA-SET): 

3.1 PROCESS EVALUATION 

3.2 DEPLOYMENT AND MAINTENANCE 

3.3 INFRASTRUCTURE AND INTEGRATION 

3.4 PROTOTYPE DEMO  

4.0 RECOMMENDATIONS:  

4.1 IMPLEMENTATION AND DEPLOYMENT  

5.0 MONITORING AND CONTROL:  

6.0 SUMMARY/CONCLUSION  


COMPANY OVERVIEW 

  

CaseStack is an American company that provides supply chain management (SCM) services. CaseStack's core competence is its consolidation programs and cross-dock functions for major retailers in the consumer packaged goods (CPG) industry. It operates as a 4PL, offering warehousing, transportation, and supply chain management software (SCMS). CaseStack's consolidation programs serve companies such as Walmart, Target, and many more. A majority of its offerings, at least 65 percent, are CPG products, mostly dry boxed food such as ramen noodles or your favorite cereal. CaseStack uses a proprietary software-as-a-service platform for its collaborative retailer consolidation programs and has been recognized among Food Logistics' Top Providers. Our goal is to provide a solution to CaseStack's inventory problems, caused mainly by variation in its inventory. The company is dealing with financial losses from scrapping and mispicks, and it needs a solution to stop these inventory discrepancies. 

  

EXECUTIVE SUMMARY  

 The supply chain depends heavily on having the right inventory in the right place at the right time, which creates major inventory management problems. Inventory optimization and loss prevention are serious concerns for major companies because of the high cost of inventory. The financial implications are significant, and these inventory losses can have a real impact on the bottom line. Companies need to be able to track their inventory at all times. One of the biggest problems is the physical count of inventory. Most companies perform at least one physical inventory count per calendar year, along with periodic cycle counts usually done at the beginning, middle, or end of the year. The results of these cycle counts may reveal lost inventory (mispicks) and write-offs described as scrap (scrapping) or obsolete products. We have identified that inventory counts are becoming increasingly difficult with the introduction of new items. We will use several methods to perform image classification analysis, which allows us to catalog the data. We will then split our data into three sets, train our models, and test them before deployment. We will perform image classification with a supervised learning model, using labeled example photos to define the images and train a model to recognize them. We will also apply object detection, a machine learning technique that allows us to identify and locate objects in an image or video. With object detection, we can monitor inventory movements and track precise locations, all while maintaining count accuracy. 

Example of the Product portfolio 

 

  

 1.1 IMPROVEMENT OPPORTUNITY 

  

Reduce inventory loss by 10 percent 

Increase productivity by improving the counting process 

Automate the inventory count 

Reallocation of resources 

The cost of the process improvement is roughly $50K to $150K, which the expected benefits outweigh. 

 

 

 

 

Warehouse floor  

 

  

1.2 MEASUREMENT  

Cycle time: benchmarking the current process for efficiency will measure the actual time spent in the process above and will ultimately tell us how much time is being used to count inventory. 

Through-put analysis will be used to identify variation and trends in our current processes so that we can reduce bottlenecks and redundancies. 

Reduce human error. 

Increase efficiencies. 

Percentage of missing inventory. 

Average monthly accuracy is below industry standards. 

 

 

1.3 SCOPE OF PROJECT  

 

We will identify the areas with the most inefficiencies and implement a process that reduces inventory count time while increasing efficiency through image classification. Image classification uses a supervised learning model, combined with object detection techniques that locate objects in an image. The entire process of image classification starts with our dataset. We need to provide our AutoML model with a series of examples and instances of our diverse array of products so that it can learn how best to identify and classify images. Creating our dataset is therefore the first step in tackling our inventory business problem, which revolves around the cost we incur by not having a product when it is needed, or by having more of a product than can be sold before it expires. We aim to train a high-quality image recognition model. One way to rapidly evaluate the performance of our algorithm is to create a training and testing split of our dataset. The training dataset is used to prepare and train the model; we treat the test dataset as brand new, with the algorithm's expected output values hidden from it. 

 

 2.1 CURRENT PROCESS KEY PERFORMANCE INDICATORS 

 Inventory is currently counted by warehouse associates with 95 percent accuracy. 

First Article Inspection captures an image of each new product along with its dimensions. 

Risk of overstocking or understocking: scrap cost the company $450K last year. 

Cost of carrying inventory: each pallet space costs $38.50 per month. 

First Article Inspection is performed on all new items; this includes a picture, the weight, and the cube of the item. 
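The carrying-cost KPI above can be turned into a simple annual figure. A minimal sketch follows; the pallet count used in the example is a hypothetical illustration, not CaseStack data.

```python
# Sketch: annual inventory carrying cost from the $38.50 per pallet space
# per month rate cited above. The pallet count is a hypothetical example.
PALLET_COST_PER_MONTH = 38.50

def annual_carrying_cost(pallet_spaces: int) -> float:
    """Annual cost of occupying the given number of pallet spaces."""
    return pallet_spaces * PALLET_COST_PER_MONTH * 12

# Example: 1,000 pallet spaces held for a full year.
print(annual_carrying_cost(1_000))  # 462000.0
```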

 

2.2 KEY VARIABLES  

Catalog of all products, including dimensions and weight 

Cubic measure of all items 

 

2.3 GOALS  

Our current process requires manpower that we can reassign once the new process is implemented 

Throughput will have an inverse relationship with cycle time 

Reduction of missing units 

Increase in operational efficiencies 

Higher performance and lower cost 

 

3.0 PROCESS (CREATING A DATA-SET): 

 

 

The entire process of image classification starts with our collection of labeled data. We need to provide our AutoML model with a series of pictures and instances of our diverse array of products so that it can learn how best to identify and classify images. Creating our dataset is the first step in tackling our inventory business problem, which revolves around the cost we incur by not having a product when it is needed, or by having more of a product than can be sold before it expires. We aim to train a high-quality image recognition model. 

The process starts when we upload training images to Cloud Storage, using commands to create buckets; our different categories/classes of products become the source of the training data. After uploading our images, we create our dataset by building a CSV file in which each row contains a URL to a training image and the associated label for that image. This involves commands that give our AutoML model access to the training dataset: they copy the images and update the CSV file with the uploaded images that will eventually be placed into the Cloud Storage bucket (categories). Finally, we inspect our images by filtering them by label, which prepares our training image dataset. 

 

Train 

This process requires a supervised learning approach to train a model. Once we have labeled our dataset, we can train a model with a supervisory signal (the labels), which tells the model the output it should produce for each input; if it does not produce that output, we can tweak the model's parameters so that it will produce the proper result for that input next time. This is referred to as supervised learning (or training). 
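The parameter-tweaking loop described above can be made concrete with a toy example. This is a deliberately simple perceptron update, not the AutoML training procedure; it only illustrates how a supervisory signal corrects a model's parameters when its output is wrong.

```python
# Toy sketch of the supervised-learning loop: the labels act as the
# supervisory signal, and the model's weights are nudged whenever its
# output disagrees with the label (a perceptron update, not AutoML).
def train_perceptron(samples, labels, epochs=10, lr=0.1):
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):   # y is the supervisory signal
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            error = y - pred                # non-zero only when output is wrong
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error                 # tweak the parameters
    return w, b

# Learns a linearly separable rule (logical OR) from labeled examples.
w, b = train_perceptron([[0, 0], [0, 1], [1, 0], [1, 1]], [0, 1, 1, 1])
```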

Depending on the quantity of files and the number of models queued for training, training might take anywhere from 2 to 8 hours. If our model is taking longer than expected, we may upgrade to a premium plan to be moved to the head of the queue and have additional computational resources assigned. 

Purpose of training: 

One way to rapidly evaluate the performance of an algorithm on our problem is to create a train and test split of our dataset. The training dataset is used to prepare and train the model; we treat the test dataset as brand new, with the algorithm's expected output values hidden from it. 

AutoML Model to train an image: 

We can use AutoML to train our own model using our data. To determine the optimal strategy for training our models, it employs Neural Architecture Search (NAS). The only thing left is to collect data in order to improve the model's accuracy. 

Steps of training: 

•      Set up the environment for your project. 

•      Save the photos on your computer for training purposes. 

•      Create an image classification system. 

•      Make a file called index.csv and save it to the bucket. 

•      Create a dataset and add index.csv to it. 

•      Train the model. 

•      Use the model to make predictions. 

•      Use Python and the Restful API to make API requests. 
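The final step above can be sketched as the construction of a prediction request. The project and model IDs below are placeholders, and the payload shape follows the AutoML Vision v1 REST format as we understand it; verify it against the current API reference before relying on it.

```python
import base64
import json

# Placeholder endpoint: PROJECT_ID and MODEL_ID must be filled in.
ENDPOINT = ("https://automl.googleapis.com/v1/projects/PROJECT_ID"
            "/locations/us-central1/models/MODEL_ID:predict")

def build_predict_request(image_bytes: bytes) -> str:
    """Build the JSON body for an AutoML Vision predict call:
    the image is base64-encoded and nested under payload.image."""
    encoded = base64.b64encode(image_bytes).decode("utf-8")
    return json.dumps({"payload": {"image": {"imageBytes": encoded}}})
```

The resulting string would be POSTed to ENDPOINT with an OAuth bearer token, e.g. via requests.post(ENDPOINT, data=body, headers=auth_headers).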

3.1 PROCESS EVALUATION  

We will split our dataset into three sets: a training set, an evaluation set, and a validation set. In evaluation, we will assess the model's performance based on what it accomplishes on data it was not trained on. We will feed the model our training set, and it will learn from it. We will then expose the model to our evaluation set, where we will tune hyperparameters. We utilize the evaluation set to double-check our training set. 
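The three-way split described above can be sketched as follows. The 70/15/15 proportions are an illustrative assumption; the report does not fix exact ratios.

```python
import random

# Sketch: split a dataset into training, evaluation, and validation sets.
# The 70/15/15 ratios below are an assumption for illustration only.
def split_dataset(items, train=0.70, evaluation=0.15, seed=42):
    shuffled = items[:]                      # leave the original list intact
    random.Random(seed).shuffle(shuffled)    # fixed seed -> reproducible split
    n = len(shuffled)
    n_train = round(n * train)
    n_eval = round(n * evaluation)
    return (shuffled[:n_train],                  # the model learns from this
            shuffled[n_train:n_train + n_eval],  # hyperparameter tuning
            shuffled[n_train + n_eval:])         # held-out validation
```

Keeping the shuffle seeded makes the split reproducible across runs, so evaluation results stay comparable while the model is tuned.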

Measure the increased efficiency  

Cost benefits over the current process  

Cost of implementation 

Pilot Production.  

Performed in a test environment 

3.2 DEPLOYMENT AND MAINTENANCE 

Once we have defined our business requirements and our goals, we want to make sure we have addressed the problem and defined a solution. Furthermore, by completing several processes to collect and label the data, we then tested our dataset against the training data to evaluate the best-fit model. We will then move to deploying the best-fit model into live production, integrating it into the existing production environment. We will start with a pilot program, whereby we begin with part of the warehouse and continue evaluating our model. It may be necessary to repeat previous steps in order to fine-tune the model. 

Production deployment will roll out in 3 stages: 

Pilot production in a partial warehouse area 

Test the model in the entire warehouse for two months 

Full deployment 

Once we have reached our desired output, the model is fully deployed. 

 

 

 

 

 

 

 

Works Cited 

Sharma, Nikita. "Strategies for Productionizing Our Machine Learning Models." Medium, Towards Data Science, 21 June 2020, https://towardsdatascience.com/strategies-for-productionizing-our-machine-learning-models-53399a3199da. 

"CaseStack, Inc." SupplyChainBrain, https://www.supplychainbrain.com/directories/98-supplier-directory/listing/97-casestack-inc.