VN1 Forecasting - Accuracy Challenge Phase 1

Flieber, Syrup Tech, and SupChains Launch an AI-Driven Supply Chain Forecasting Competition

Machine Learning/AI
Enterprise
E-commerce/Retail
Total Prize: $20,000

Datathon Type


Challenge with monetary prizes

No. of Users

649

No. of Submissions

281

No. of Teams

56


Description

Description of the competition

Welcome to the VN1 Forecasting - Accuracy Challenge! Here’s everything you need to know to participate.

 

🚀 Take on the VN1 Forecasting Accuracy Challenge and compete for a share of $20,000 in prizes! With a grand prize of $10,000, you’ll not only enhance your skills in predictive modeling and data analysis but also gain insights from top industry professionals. Plus, standout participants could unlock internship and placement opportunities. Showcase your talent, gather valuable feedback, and elevate your career today! 💡

 

Participants in this datathon are tasked with accurately forecasting future sales using
the historical sales and pricing data provided. The goal is to develop robust
predictive models that can anticipate sales trends for various products across different
clients and warehouses. Submissions will be evaluated based on their accuracy and bias
against actual sales figures. The competition is structured into two phases.

 

Phase 1 [12th of September 2024 - 3rd of October 2024]

In this phase you will use the provided Phase 0 sales data to predict sales for Phase 1. This phase will last three weeks, during which there will be live leaderboard updates to track your progress and provide feedback on your predictions. At the end of Phase 1, you will receive the actual sales data for this phase.

 

Phase 2 [3rd of October 2024 - 17th of October 2024]

Using both Phase 0 and Phase 1 data, you will now predict sales for Phase 2. This second phase will last two weeks, but unlike Phase 1, there will be no leaderboard updates until the competition ends.

Organisers & Sponsors

Timeline

12 September at 05:00 UTC – Challenge starts

03 October at 15:00 UTC – Challenge ends (Public leaderboard)

03 October at 15:00 UTC – Challenge ends (Private leaderboard)

Description of Timeline

In Phase 1, you will use the provided Phase 0 sales data to predict sales for Phase 1. This
phase will last three weeks, during which there will be live leaderboard updates to track
your progress and provide feedback on your predictions. At the end of Phase 1, you will
receive the actual sales data for this phase.


Following this, Phase 2 begins. Using both Phase 0 and Phase 1 data, you will now predict
sales for Phase 2. This second phase will last two weeks, but unlike Phase 1, there will be
no leaderboard updates until the competition ends – only the last submission for Phase 2
will be taken into account. 


Phase 1 – Starts on Thursday, the 12th of September (08:00 AM UTC) and ends on Thursday, the 3rd of October (07:59 AM UTC)
Phase 2 – Starts on Thursday, the 3rd of October (08:00 AM UTC) and ends on Thursday, the 17th of October (06:00 PM UTC)
Winner announcement – Last week of October (Week of 28th of October)
Online conference for winners to present their solutions & insights – November 2024

Evaluation

Performance & Evaluation

 

Evaluation Metrics and Criteria

Your submissions will be evaluated based on a score combining accuracy and bias (both normalized as percentages).

 

In practice, the following code will be used to evaluate submissions:

 

import numpy as np
import pandas as pd

def data_competition_evaluation(phase="Phase 2", name=""):

    # The submission is loaded from a .csv file.
    submission = pd.read_csv(name)
    assert all(col in submission.columns for col in ["Client", "Warehouse", "Product"])
    submission = submission.set_index(["Client", "Warehouse", "Product"])
    submission.columns = pd.to_datetime(submission.columns)
    assert not submission.isnull().any().any()  # No missing values allowed.

    # Load the objective (actual sales for the phase).
    objective = pd.read_csv(f"{phase} - Sales.csv").set_index(["Client", "Warehouse", "Product"])
    objective.columns = pd.to_datetime(objective.columns)
    assert (submission.index == objective.index).all()
    assert (submission.columns == objective.columns).all()

    # Score = (total absolute error + absolute total error) / total actual sales,
    # so both inaccuracy and bias are penalized.
    abs_err = np.nansum(abs(submission - objective))
    err = np.nansum(submission - objective)
    score = abs_err + abs(err)
    score /= objective.sum().sum()
    print(f"{name}:", score)  # It's a percentage.
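To make the metric concrete, here is a small worked example on toy numbers (invented for illustration; not competition data). It applies the same formula as the evaluation function to two series over three weeks:

```python
import numpy as np
import pandas as pd

# Toy example: 2 series x 3 weeks (hypothetical values).
objective = pd.DataFrame([[10, 20, 30], [5, 5, 5]], dtype=float)
submission = pd.DataFrame([[12, 18, 30], [5, 8, 5]], dtype=float)

abs_err = np.nansum(abs(submission - objective))      # 2 + 2 + 0 + 0 + 3 + 0 = 7
err = np.nansum(submission - objective)               # 2 - 2 + 0 + 0 + 3 + 0 = 3
score = (abs_err + abs(err)) / objective.sum().sum()  # (7 + 3) / 75
print(score)  # ~0.1333, i.e. about 13.3%
```

Note how the bias term rewards forecasts whose over- and under-predictions cancel out: the two errors on the first series offset each other in `err`, so only the systematic over-forecast on the second series adds to the bias penalty.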

 

 

Benchmark or Baseline

Participants can use provided baselines for comparison. They serve as a starting point for participants to build and refine their models, aiming to surpass these initial benchmarks in terms of accuracy and efficiency.

By adhering to these evaluation metrics and leveraging the provided baselines, participants can demonstrate the robustness and effectiveness of their predictive models in forecasting sales trends.
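The official baselines are not specified on this page. As a hypothetical illustration only, a common naive baseline for weekly demand is to project the recent average level of each series flat across the 13-week horizon; the function below sketches this, assuming a table indexed by (Client, Warehouse, Product) with one column per week, mirroring the competition's file layout:

```python
import pandas as pd

def naive_baseline(sales: pd.DataFrame, horizon_weeks: int = 13, window: int = 8) -> pd.DataFrame:
    """Repeat the mean of the last `window` weeks of each series over the horizon.

    This is an illustrative naive forecast, not the competition's official baseline.
    """
    last_week = pd.to_datetime(sales.columns[-1])
    # Future week labels: one per week after the last observed week.
    future = pd.date_range(last_week + pd.Timedelta(weeks=1), periods=horizon_weeks, freq="7D")
    # Per-series level: mean of the trailing window.
    level = sales.iloc[:, -window:].mean(axis=1)
    return pd.DataFrame({d: level for d in future})
```

A forecast like this is easy to beat with proper models, but it gives an immediate point of comparison for the accuracy-plus-bias score.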

Presentation and Documentation

To submit a prediction, follow these rules:

  • Submit your forecasts in a .csv file 
  • Make sure there are no missing values in your submissions.  
  • Submit a forecast for all products and warehouses. 
  • You need to forecast 13 weeks of demand.  
  • For Phase 2, you must submit your commented code with your forecasts. We need to be able to replicate the winning forecasts; if you use stochastic models, please set a seed to get reproducible results. 
Join our private community in Discord

Keep up to date by participating in our global community of data scientists and AI enthusiasts. We discuss the latest developments in data science competitions, new techniques for solving complex challenges, AI and machine learning models, and much more!