michaelcox/fizzbuzz

Coding Exercise

Background

At Vardera, we use machine learning and computer vision to appraise hard assets: coins, trading cards, furniture, fine art, and other collectibles. Our pipeline ingests items from auction platforms, scrapes images and metadata, generates image embeddings, and produces valuations.

The log files in this repo are real production logs from a single pipeline run processing one item. They capture the full lifecycle: scraping listing data, downloading images, resizing and processing them, generating embeddings via AWS Titan, and saving results to our database. You'll see logs at various severity levels (INFO, ERROR, NOTICE, etc.) reflecting different stages and outcomes of the process.

Files

  • main.py — Your working file (currently just prints "Hello World")
  • sample-logs.json — Sample logs in JSON format
  • sample-logs.csv — The same logs in CSV format (identical data; use whichever you prefer)

Run your code with:

python main.py

Autocomplete is enabled. You can use Google or AI tools; just be ready to explain your code.

Task

Write a function that parses the log data and returns a count of each log level.

Use whichever log file you prefer. Your output should be a dictionary mapping log levels to their counts, e.g.:

{"INFO": 123, "ERROR": 5, "WARNING": 12}
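As a starting point, the counting step can be sketched with `collections.Counter`. This assumes the JSON file is an array of log objects that each carry a `"level"` key; the actual field name in sample-logs.json may differ, so check the data first (as the notes below suggest).

```python
import json
from collections import Counter

def count_log_levels(logs):
    # logs: iterable of dicts, each assumed to have a "level" key
    # (the key name is an assumption -- verify it against the sample files)
    return dict(Counter(entry["level"] for entry in logs))

# Example usage, assuming sample-logs.json is a JSON array of log objects:
# with open("sample-logs.json") as f:
#     print(count_log_levels(json.load(f)))
```

The same function works unchanged on the CSV variant if you load rows with `csv.DictReader`, since each row is also a dict.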

Notes

  • Think out loud as you work
  • It's fine to look at the data first
  • Ask questions if something is unclear
