{"id":79011,"date":"2021-12-02T17:18:04","date_gmt":"2021-12-02T17:18:04","guid":{"rendered":"https:\/\/papersspot.com\/blog\/2021\/12\/02\/content-1-1-improvement-opportunity-1-2-measurement-1-3-scope-of-project-2-1-current\/"},"modified":"2021-12-02T17:18:04","modified_gmt":"2021-12-02T17:18:04","slug":"content-1-1-improvement-opportunity-1-2-measurement-1-3-scope-of-project-2-1-current","status":"publish","type":"post","link":"https:\/\/papersspot.com\/blog\/2021\/12\/02\/content-1-1-improvement-opportunity-1-2-measurement-1-3-scope-of-project-2-1-current\/","title":{"rendered":"CONTENT 1.1 IMPROVEMENT OPPORTUNITY 1.2 MEASUREMENT 1.3 SCOPE OF PROJECT 2.1 CURRENT"},"content":{"rendered":"<p>CONTENT<\/p>\n<p>1.1 IMPROVEMENT OPPORTUNITY<\/p>\n<p>1.2 MEASUREMENT<\/p>\n<p>1.3 SCOPE OF PROJECT<\/p>\n<p>2.1 CURRENT PROCESS KEY PERFORMANCE INDICATORS<\/p>\n<p>2.2 KEY VARIABLES<\/p>\n<p>2.3 GOALS<\/p>\n<p>3.0 PROCESS (CREATING A DATASET)<\/p>\n<p>3.1 A.I. SOLUTION DESIGN<\/p>\n<p>3.3 INFRASTRUCTURE AND INTEGRATION<\/p>\n<p>3.4 PROTOTYPE DEMO<\/p>\n<p>4.0 RECOMMENDATIONS<\/p>\n<p>4.1 IMPLEMENTATION AND DEPLOYMENT<\/p>\n<p>5.0 MONITORING AND CONTROL<\/p>\n<p>6.0 SUMMARY\/CONCLUSION<\/p>\n<p>COMPANY OVERVIEW<\/p>\n<p>CaseStack is an American company that provides supply chain management (SCM) services. CaseStack\u2019s core competencies are its retailer consolidation programs and cross-dock functions for major retailers in the consumer packaged goods (CPG) industry. It operates as a fourth-party logistics provider (4PL), offering warehousing, transportation, and supply chain management software (SCMS). 
CaseStack\u2019s consolidation programs serve retailers such as Walmart and Target. At least 65 percent of its offerings are CPG products, mostly dry boxed food such as ramen noodles and breakfast cereal. CaseStack uses a proprietary software-as-a-service platform for its collaborative retailer consolidation programs and has been recognized among Food Logistics&#8217; Top Providers. Our goal is to provide a solution to the inventory problems caused mainly by variation in their inventory. They are dealing with financial losses due to scrapping and mispicks and need a solution that stops the inventory discrepancies.<\/p>\n<p>EXECUTIVE SUMMARY<\/p>\n<p>The supply chain depends heavily on having the right inventory in the right place at the right time, which makes inventory management a major challenge. Inventory optimization and loss prevention are significant concerns for major companies because of the high cost of inventory. The financial implications are substantial, and these inventory losses can have a significant impact on the bottom line. Companies need to be able to track their inventory at all times. One of the biggest problems is the physical count of inventory. Most companies perform at least one physical inventory count per calendar year, along with periodic cycle counts, usually at the beginning, middle, or end of the year. 
The results of these cycle counts may reveal lost inventory (mispicks) and write-offs recorded as scrap (scrapping) or obsolete product. We have identified that inventory counts are becoming increasingly difficult as new items are introduced. We will use several methods to perform image classification analysis, which allows us to catalogue the data. We will then split our data into three sets, train the model, and test it before deploying it. We will perform image classification with a supervised learning model, defining and identifying the images and training a model to recognize them using labelled example photos. We will also apply an object detection machine learning technique that allows us to identify and locate objects in an image or video. With object detection, we can monitor inventory movements and track their precise locations while maintaining count accuracy.<\/p>\n<p>Example of the product portfolio<\/p>\n<p>1.1 IMPROVEMENT OPPORTUNITY<\/p>\n<p>Reduce inventory loss by 10 percent<\/p>\n<p>Increase productivity by improving the counting process<\/p>\n<p>Automate the inventory count<\/p>\n<p>Reallocate resources<\/p>\n<p>The cost of the process improvement is roughly between $50K and $150K, and the expected benefits outweigh the cost and risks.<\/p>\n<p>Warehouse floor<\/p>\n<p>1.2 MEASUREMENT<\/p>\n<p>Cycle time: benchmarking the current process will measure the actual time spent in the steps above and ultimately tell us how much time is being used to count inventory.<\/p>\n<p>Using 
throughput analysis to identify variation and trends in our current processes, we will reduce bottlenecks and redundancies in the process.<\/p>\n<p>Reduce errors related to manual counting<\/p>\n<p>Increase efficiencies<\/p>\n<p>Missing inventory percentage<\/p>\n<p>Average monthly accuracy is below industry standards<\/p>\n<p>1.3 SCOPE OF PROJECT<\/p>\n<p>We will identify the areas with the most inefficiency and implement a process that reduces inventory count time and increases efficiency by using image classification. Image classification uses a supervised learning model, along with object detection machine learning techniques to locate objects in an image. The entire process of image classification starts with our dataset. We need to provide our AutoML model with a series of examples and instances of our diverse array of products so that it can learn how best to identify and classify images. Creating our dataset is the first step in tackling our inventory business problem, which revolves around the cost we incur by not having a product when it is needed or by having more of a product than can be sold before it expires. We aim to train a high-quality image recognition model. One way we will rapidly evaluate the performance of our algorithm is to create a training and testing split of our dataset. A training dataset is used to prepare and train a model. 
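The train and test split described above can be sketched in plain Python. This is a minimal illustration: the 80\/20 ratio, file names, and product labels are assumptions for demonstration, not figures from the project.

```python
import random

def train_test_split(items, test_pct=20, seed=42):
    """Shuffle labelled items and split them into train and test sets.

    The held-out test set plays the role of "brand-new" data: the model
    never sees it during training, so accuracy on it estimates real-world
    performance on unseen inventory images.
    """
    rng = random.Random(seed)  # fixed seed keeps the split reproducible
    shuffled = items[:]
    rng.shuffle(shuffled)
    # Integer arithmetic avoids floating-point rounding in the cut point.
    cut = len(shuffled) * (100 - test_pct) // 100
    return shuffled[:cut], shuffled[cut:]

# Hypothetical labelled examples: (image file, product label) pairs.
labelled = [("img_%03d.jpg" % i, "cereal" if i % 2 else "noodles")
            for i in range(100)]
train, test = train_test_split(labelled)
```

Shuffling before the cut matters: without it, a split taken in upload order could put all of one product category into the test set.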
We treat the test dataset as brand-new data whose expected output values are hidden from the model until evaluation.<\/p>\n<p>2.1 CURRENT PROCESS KEY PERFORMANCE INDICATORS<\/p>\n<p>Inventory is currently counted by warehouse associates with 95 percent accuracy<\/p>\n<p>Risk of overstocking or understocking; scrap cost the company $450K last year<\/p>\n<p>Cost of carrying inventory: each pallet space costs $38.50 per month<\/p>\n<p>First Article Inspection is performed on all new items and captures a picture, the dimensions, the weight, and the cube of the item<\/p>\n<p>2.2 KEY VARIABLES<\/p>\n<p>Catalog of all products, including dimensions and weight<\/p>\n<p>Cubic measure of all items<\/p>\n<p>2.3 GOALS<\/p>\n<p>The current process requires manpower that can be reassigned once the solution is implemented<\/p>\n<p>Throughput will have an inverse relationship with cycle time<\/p>\n<p>Reduction of missing units<\/p>\n<p>Increase in operational efficiency<\/p>\n<p>Higher performance and lower cost<\/p>\n<p>3.0 PROCESS (CREATING A DATASET)<\/p>\n<p>The entire process of image classification starts with our collection of labelled data. We need to provide our AutoML model with a series of pictures and instances of our diverse array of products so that it can learn how best to identify and classify images; this takes us back to our dataset. 
Creating our dataset is the first step in tackling our inventory business problem, which revolves around the cost we incur by not having a product when it is needed or by having more of a product than can be sold before it expires. We aim to train a high-quality image recognition model.<\/p>\n<p>The process starts when we upload training images to Cloud Storage, using commands to create buckets; our different categories\/classes of products become the source of the training data. After uploading our images, we create our dataset by building a CSV file in which each row contains a URL to a training image and the associated label for that image. This involves commands that give our AutoML model access to the training dataset: they copy the images and update the CSV file with the uploaded images, which are placed into the Cloud Storage bucket by category. Finally, we inspect our images by filtering them by label, which prepares our training image dataset.<\/p>\n<p>Train<\/p>\n<p>This process requires a supervised learning approach to train a model. Once we have labelled our dataset, we can train a model with a supervisory signal (the labels), which tells the model the output it should produce for each input; if it does not, we adjust the model\u2019s parameters so that it produces the proper result for that input next time. This is referred to as supervised learning (or training).<\/p>\n<p>Depending on the quantity of files and the models queued for training, training might take anywhere from 2 to 8 hours. 
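The index CSV described above, with one row per training image pairing a Cloud Storage URL with its label, can be sketched as follows. The bucket name, file names, and labels here are placeholders for illustration, not the project\u2019s actual values.

```python
import csv
import io

def build_index_csv(image_labels, bucket="gs://example-inventory-bucket"):
    """Build an AutoML-style index: each row holds an image URI and its label.

    `bucket` is a placeholder; substitute the project's real Cloud Storage
    bucket. Images are assumed to be stored under one folder per category,
    so the label doubles as the folder name in the URI.
    """
    buf = io.StringIO()
    writer = csv.writer(buf)
    for filename, label in image_labels:
        writer.writerow(["%s/%s/%s" % (bucket, label, filename), label])
    return buf.getvalue()

index = build_index_csv([("box_001.jpg", "cereal"),
                         ("box_002.jpg", "noodles")])
```

In practice this file would be written to the same bucket and then referenced when creating the dataset, so the model can resolve each row to an uploaded image.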
If a model is taking longer than expected, we can upgrade to a premium plan to be bumped to the head of the queue and have additional computational resources assigned.<\/p>\n<p>Purpose of training:<\/p>\n<p>One way to rapidly evaluate the performance of an algorithm is to create a train and test split of the dataset. The training dataset is used to prepare and train the model; the test dataset is treated as brand-new data whose expected output values are hidden from the model until evaluation.<\/p>\n<p>Using an AutoML model to train on images:<\/p>\n<p>We can use AutoML to train our own model using our data. To determine the optimal strategy for training our models, it employs Neural Architecture Search (NAS). The only remaining task is to collect data in order to improve the model\u2019s accuracy.<\/p>\n<p>Steps of training:<\/p>\n<p>\u2022 Set up the environment for the project.<\/p>\n<p>\u2022 Save the photos for training purposes.<\/p>\n<p>\u2022 Create an image classification system.<\/p>\n<p>\u2022 Create a file called index.csv and save it to the bucket.<\/p>\n<p>\u2022 Create a dataset and add index.csv to it.<\/p>\n<p>\u2022 Train the model.<\/p>\n<p>\u2022 Use the model to make predictions.<\/p>\n<p>\u2022 Use Python and the REST API to make API requests.<\/p>\n<p>3.1 PROCESS EVALUATION<\/p>\n<p>We will split our dataset into three sets: a training set, an evaluation set, and a validation set. In evaluation, we assess the model\u2019s performance based on what it accomplishes. 
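The three-way split named in 3.1 can be sketched as a simple extension of a train and test split. The 70\/15\/15 ratios below are an assumption for illustration; the project text does not fix the proportions.

```python
import random

def three_way_split(items, train_pct=70, eval_pct=15, seed=7):
    """Split labelled data into training, evaluation, and validation sets.

    The training set fits the model, the evaluation set guides
    hyperparameter tuning, and the validation set gives a final check on
    data the tuning never touched. Ratios here are illustrative.
    """
    rng = random.Random(seed)
    shuffled = items[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    # Integer arithmetic keeps the set sizes exact.
    n_train = n * train_pct // 100
    n_eval = n * eval_pct // 100
    train = shuffled[:n_train]
    evaluation = shuffled[n_train:n_train + n_eval]
    validation = shuffled[n_train + n_eval:]
    return train, evaluation, validation

train, evaluation, validation = three_way_split(list(range(200)))
```

Keeping the validation set untouched during hyperparameter tuning is the point of the three-way design: scores on it are not biased by the tuning choices made against the evaluation set.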
We will feed the model our training set, and it will learn from it. We will then expose the model to our evaluation set, where we will tune hyperparameters. We use the evaluation set to double-check what was learned from the training set.<\/p>\n<p>Measure the increased efficiency<\/p>\n<p>Cost benefits over the current process<\/p>\n<p>Cost of implementation<\/p>\n<p>Pilot production<\/p>\n<p>Performed in a test environment<\/p>\n<p>3.2 DEPLOYMENT AND MAINTENANCE<\/p>\n<p>Once we have defined our business requirements and goals, we want to make sure we have addressed the problem and defined a solution. Furthermore, after completing several processes to collect and label the data, we test our dataset against the training data to evaluate the best-fit model. We then move to deployment of the best-fit model into live production, integrating it into the existing production environment. We will start with a pilot program whereby we begin with part of the warehouse and continue evaluating our model. It may be necessary to repeat previous steps in order to fine-tune the model.<\/p>\n<p>Production deployment rollout, in three stages:<\/p>\n<p>Pilot production in a partial warehouse area<\/p>\n<p>Test the model in the entire warehouse for two months<\/p>\n<p>Full deployment<\/p>\n<p>Once we have reached our desired output, the model is fully deployed.<\/p>\n<p>Works Cited<\/p>\n<p>Sharma, Nikita. 
\u201cStrategies for Productionizing Our Machine Learning Models.\u201d Medium, Towards Data Science, 21 June 2020, https:\/\/towardsdatascience.com\/strategies-for-productionizing-our-machine-learning-models-53399a3199da.<\/p>\n<p>\u201cCaseStack, Inc.\u201d SupplyChainBrain, https:\/\/www.supplychainbrain.com\/directories\/98-supplier-directory\/listing\/97-casestack-inc.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>CONTENT\u00a0 \u00a0 1.1 IMPROVEMENT OPPORTUNITY\u00a0 1.2 MEASUREMENT\u00a0 1.3 SCOPE OF PROJECT\u00a0 2.1 CURRENT PROCESS KEY PERFORMANCE INDICATORS\u00a0 2.2 KEY VARIABLES\u00a0 2.3 GOALS\u00a0 3.0\u00a0PROCESS (CREATING A DATA-SET):\u00a0 3.1 A. I SOLUTION DESIGN\u00a0\u00a0 3.3 INFRASTRUCTURE AND INTEGRATION\u00a0\u00a0 3.4 PROTOTYPE DEMO\u00a0\u00a0 4.0 RECOMMENDATIONS:\u00a0\u00a0 4.1 IMPLEMENTATION AND DEPLOYMENT\u00a0\u00a0 5.0 MONITORING AND CONTROL:\u00a0\u00a0 6.0 SUMMARY\/CONCLUSION\u00a0\u00a0 \u00a0\u00a0 \u00a0\u00a0 \u00a0\u00a0 \u00a0\u00a0 \u00a0\u00a0 
[&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[10],"class_list":["post-79011","post","type-post","status-publish","format-standard","hentry","category-research-paper-writing","tag-writing"],"_links":{"self":[{"href":"https:\/\/papersspot.com\/blog\/wp-json\/wp\/v2\/posts\/79011","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/papersspot.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/papersspot.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/papersspot.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/papersspot.com\/blog\/wp-json\/wp\/v2\/comments?post=79011"}],"version-history":[{"count":0,"href":"https:\/\/papersspot.com\/blog\/wp-json\/wp\/v2\/posts\/79011\/revisions"}],"wp:attachment":[{"href":"https:\/\/papersspot.com\/blog\/wp-json\/wp\/v2\/media?parent=79011"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/papersspot.com\/blog\/wp-json\/wp\/v2\/categories?post=79011"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/papersspot.com\/blog\/wp-json\/wp\/v2\/tags?post=79011"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}