Retail Cannabis Rating Index
I’ve used the function below to calculate an index ranking based on the measures I track from the dried cannabis reviews.
Here are the variables and relative weights I’ve used for the function:
- Variance from:
  - Average Price Per Gram by Package Size (PP) (15%)
  - Average Content by Dominant Cannabinoid (CD) (10%)
  - Average Content by Cultivar Name (CC) (0.5%)
  - Average Price Per 100mg THC+CBD by Dominant Cannabinoid (AD) (15%)
  - Average Days Packaged Before Purchase, All Purchases (DM) (15%)
  - Average Qualitative Rank, All Purchases (RM) (40%)
Basically, it’s the weighted sum of the variances from the benchmark averages I track. Here is the function:
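A minimal Python sketch of that weighted sum, assuming the six variances have already been computed and are signed so that better-than-benchmark is positive:

```python
# Sketch only: the variable abbreviations and relative weights are
# the ones listed above; the function itself is my reconstruction.
WEIGHTS = {
    "PP": 0.15,   # price per gram vs. package-size average
    "CD": 0.10,   # content vs. dominant-cannabinoid average
    "CC": 0.005,  # content vs. cultivar average
    "AD": 0.15,   # price per 100mg THC+CBD vs. cannabinoid average
    "DM": 0.15,   # days packaged before purchase vs. overall average
    "RM": 0.40,   # qualitative rank vs. overall average
}

def rating(variances):
    """Weighted sum of signed benchmark variances for one review."""
    return sum(WEIGHTS[k] * variances[k] for k in WEIGHTS)
```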
No, the weights don’t add up to 100%, but they don’t need to; the relative weights are what matter. Also important to note: my subjective opinion on performance makes up 40% of the index value, while the remaining 60% is based on hard numbers, filtered through a layer of competitive analysis.
We’re going to compare all the offerings I’ve reviewed along the above dimensions, but first we’ll touch on the order of operations.
I calculate averages for each variable listed above across the dataset. Some averages are taken with respect to product type. For instance, I want to compare a whole flower product to the benchmark average of other whole flower products, so other product types are omitted.
Similarly with content, I don’t want to compare a CBD dominant cultivar against an average comprised of data from THC dominant cultivars, so we section the benchmark calculations accordingly.
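With pandas, that sectioning can be sketched as grouped benchmarks; the column names and values here are made up for illustration, not the actual dataset:

```python
import pandas as pd

# Hypothetical review data: each row is one reviewed offering.
reviews = pd.DataFrame({
    "product_type": ["whole flower", "whole flower", "milled"],
    "dominant":     ["THC", "CBD", "THC"],
    "price_per_g":  [10.0, 12.0, 7.0],
    "content_mg_g": [200.0, 140.0, 180.0],
})

# Benchmarks are sectioned by product type and by dominant cannabinoid,
# so each review is only compared against its own category.
price_bench   = reviews.groupby("product_type")["price_per_g"].transform("mean")
content_bench = reviews.groupby("dominant")["content_mg_g"].transform("mean")

# Signed variances from each benchmark: cheaper than the category
# average is positive, more content than the category average is positive.
reviews["PP_var"] = price_bench - reviews["price_per_g"]
reviews["CD_var"] = reviews["content_mg_g"] - content_bench
```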
We also look at content at the cultivar level, so if I’ve reviewed the same cultivar more than once, the variance among those reviews is also incorporated in the calculation.
Once we have the benchmarks, I calculate the difference between the averages and the values from each review. Those variances are weighted and tallied using the formula above, which gives us a number that we’ll use to compare each offering to another.
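Putting those steps together, here’s a hedged sketch of the weighting-and-tallying stage, with hypothetical variance values for two offerings:

```python
import pandas as pd

# The relative weights listed at the top of the post.
weights = {"PP": 0.15, "CD": 0.10, "CC": 0.005,
           "AD": 0.15, "DM": 0.15, "RM": 0.40}

# Made-up signed variances (review value minus benchmark, oriented so
# that better-than-benchmark is positive) for two example offerings.
variances = pd.DataFrame({
    "PP": [1.0, -0.5], "CD": [10.0, -4.0], "CC": [2.0, 0.0],
    "AD": [0.5, -1.0], "DM": [3.0, -6.0], "RM": [0.8, -0.2],
}, index=["Offering A", "Offering B"])

# Weighted sum of variances per offering, ranked descending:
# big numbers are good, negatives are worst.
ratings = (variances * pd.Series(weights)).sum(axis=1).sort_values(ascending=False)
```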
For the calculation (pancakenap rating), big numbers are good. The scores get worse as you move closer to 0, and then way worse as you move into the negative numbers.
We’re going to look at this data in 3 levels of granularity, and here is the first.
Here we look at each listing individually, showing the rating by the bar’s length, and the variance from the average rating with colour.
Hover/tap each bar to see the relevant information about the stats making up the rating. Use the highlighter to find a brand.
Let’s make a few notes on the above graph.
CBD dominant cultivars are ranked higher
The average content for a CBD dominant cultivar is less than the average content for a THC dominant cultivar; the difference is about 3%, or 30mg/g. However, there are CBD dominant cultivars that make well above the average for the category and surpass the average of a THC dominant cultivar. Because those offerings vary more from the average of their category, they do better on this list. That’s not to say a THC dominant cultivar can’t benefit from a similar span; there just isn’t one represented here.
Aurora’s LA Confidential is not that memorable in my mind, so I was surprised to see it up there. I have reviewed TGOD’s LA Confidential as well, which I preferred less; it made a lower cannabinoid content at a higher price, so the Aurora LA Confidential benefits from its variance from the average content across all my LA Confidential reviews. Similarly with Tantalus Labs’ CBD Skunk Haze: of the three I’ve reviewed so far, it made the most content, among other positive attributes.
Now we summarize by brand. I keep the formula the same, and all the listing-specific measures have been removed or averaged. Hover/tap to see the relevant information.
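A sketch of that roll-up, assuming (hypothetically) that each measure is averaged per brand before the same weighted sum is applied; the cultivar-level term (CC) is treated here as a listing-specific measure and dropped:

```python
import pandas as pd

# Same relative weights as the listing-level formula, minus CC.
weights = {"PP": 0.15, "CD": 0.10, "AD": 0.15, "DM": 0.15, "RM": 0.40}

# Hypothetical listing-level variances for two made-up brands.
listings = pd.DataFrame({
    "brand": ["Alpha", "Alpha", "Beta"],
    "PP": [1.0, -1.0, 0.5], "CD": [5.0, 3.0, -2.0],
    "AD": [0.2, 0.0, 0.4], "DM": [2.0, -2.0, 1.0],
    "RM": [0.5, 0.1, 0.9],
})

# Average each measure per brand, then apply the same weighted sum.
brand_avgs = listings.groupby("brand")[list(weights)].mean()
brand_ratings = (brand_avgs * pd.Series(weights)).sum(axis=1)
```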
Stepping back one level further, the brands are now grouped by the producers that own them. Hover/tap to see the relevant information.
Thanks for reading this post. I hope you find it useful when selecting cannabis for yourself.
A couple of things I’m thinking of implementing here:
I’d like to build a component that incorporates past review scores from the brand, sort of like a reliability quotient. So if the brand has done well in the past, they get a section of points based on that. Similarly, brands that I haven’t preferred start with less.
I wasn’t actually certain if this would be appropriate, both from an accuracy and fairness standpoint. This calculation doesn’t make allowances for the underdogs or producers who’ve recalibrated their SOP. Now that I think about it, there are actually a bunch of reasons why it wouldn’t work… So just a thought for now.
Using the total point range, I’ll implement a calculation that applies a grade to each of the reviews. Thinking everything below 50% fails, and I’ll mimic the standard letter grade system going up. Doing the quick math, the system means I’d be giving an A+ to one review and an A- to another. The rest would be B’s and lower. More than half my reviews would receive the failing grade, F, which I feel is probably accurate.
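A sketch of that grading idea: normalize each rating into the observed point range, fail everything under 50%, and step through letter bands above that. The cutoffs below are the conventional ones and are my assumption, not a settled scale:

```python
# Hypothetical letter bands (percentage cutoffs, checked top-down).
BANDS = [(97, "A+"), (93, "A"), (90, "A-"), (80, "B"), (70, "C"), (50, "D")]

def grade(rating, lo, hi):
    """Map a rating to a letter grade, given the lowest and highest
    ratings observed (the total point range)."""
    pct = 100 * (rating - lo) / (hi - lo)
    for cutoff, letter in BANDS:
        if pct >= cutoff:
            return letter
    return "F"  # everything below 50% of the range fails
```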