Summary
The Versatile Video Coding or H.266 (“VVC”) Objective PAtent Landscape (“OPAL”) tool objectively scores the statistical essentiality of patent publications to VVC functionality. It was created using a machine learning algorithm trained on a large set of expert-reviewed VVC patents. A short summary of the methodology follows:
- Universe of Patents Subject to Analysis - 4.7M patent publications, worldwide
- Patents Evaluated Manually by Experts - 3,000+ unique patent families evaluated by technical experts at Scintillation Research.
- AI Training - Patents were vectorized using FastText, and a binary classification algorithm was trained to predict essentiality across the universe of patents.
- ML Performance - Training followed accepted machine learning practices, and the results achieved a high F-1 score of 0.97.
Basis for Using Machine Learning to Predict Essentiality
The question of who owns VVC standard-essential patents is one that many institutions and companies are grappling with as these important technologies are deployed. Compounding the confusion, HEVC and AVC led to a fractured patent pool approach. In practice this presents two issues: the first is ownership, and the second is a royalty stack that grows if VVC follows the same pattern as its predecessors.
In addition, a select few companies have claimed through declarations that they own VVC-essential patents, but there is currently no economically sensible way to evaluate these claims. Furthermore, experience shows that there can be many potentially essential patents that are never declared and that are not encumbered by any FRAND obligations. Unified Patents' VVC “OPAL” landscape identifies not only self-declared VVC patents but also these undeclared, FRAND-unencumbered patents.
According to OPEN, Unified Patents’ JVET standards-submissions database, over 8,575 technical contributions have been submitted. The top 10 contributing companies account for 70% of all technical contributions.
However, unlike 3GPP/ETSI, where companies self-declare – and tend to over-declare – their patent portfolios, no such phenomenon exists in ITU-T’s standard-setting system. Instead of self-declarations, companies submit “Patent Statement and Licensing Declaration” forms that often include no patents and amount to blanket declarations. This makes the landscape very difficult to ascertain. The companies submitting these declarations bear little resemblance to those making the technical contributions, as demonstrated below. Nokia and Apple alone account for 86% of these declarations, which clouds the picture of ownership suggested by the technical contributions. As of June 15, 2022, there had been only 5 declarations in 2022.
Recognizing this, Unified Patents turned to machine-learning-based analytics to predict VVC essentiality for hundreds of thousands of patents relevant or tangential to video coding. The criteria for these analytics were unwavering objectivity, transparency, cost-efficiency, consistency, and sufficient reliability.
Creating a Training Set
The training set for the OPAL VVC essentiality model comprises positive and negative labels derived from patents deemed essential by the VVC pools administered by MPEG-LA and Access Advance, further supplemented by ~3,500 patent families reviewed by experts at Scintillation Research.
As of the first quarter of 2023, the MPEG-LA and Access Advance patent pools had deemed a total of ~5,500 patent publications essential to the VVC standard. These patents were expanded by family and assigned to the set of positive labels. Another large set of patents relating to video compression was inferred to be negative labels. These negative labels consist of randomly sampled patents belonging to MPEG-LA and Access Advance licensors that were either not reviewed by the two patent pools or not deemed essential to VVC.
Unified Patents then retained the experts at Scintillation, who reviewed ~3,500 patent families. Of these, ~900 were deemed highly essential and were placed in the set of positive labels. The remainder were expanded by family and placed in the set of negative labels.
In total, the training set comprised ~24,000 positive labels and ~47,000 negative labels.
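For concreteness, the following is a minimal Python sketch of the label-assembly logic described above. The inputs (pool_essential, licensor_sample, reviews, family_index) are hypothetical stand-ins for the pool lists and Scintillation's review results; this illustrates the set logic only, not Unified Patents' actual pipeline.

```python
# Hypothetical sketch of assembling positive/negative labels.
# pool_essential: publications deemed essential by the pools
# licensor_sample: randomly sampled licensor publications
# reviews: publication -> expert verdict
# family_index: publication -> set of family-member publications

def expand_family(publications, family_index):
    """Expand a set of publications to every member of their patent families."""
    expanded = set()
    for pub in publications:
        expanded.update(family_index.get(pub, {pub}))
    return expanded

def build_labels(pool_essential, licensor_sample, reviews, family_index):
    # Positives: pool-essential publications plus families the experts
    # scored as highly essential, each expanded by family.
    positives = expand_family(pool_essential, family_index)
    positives |= expand_family(
        {pub for pub, verdict in reviews.items() if verdict == "highly_essential"},
        family_index,
    )
    # Negatives: sampled licensor patents not deemed essential, plus the
    # remaining expert-reviewed families.
    negatives = expand_family(licensor_sample, family_index)
    negatives |= expand_family(
        {pub for pub, verdict in reviews.items() if verdict != "highly_essential"},
        family_index,
    )
    # A publication cannot carry both labels; resolve conflicts toward positive.
    return positives, negatives - positives
```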
Predicting Essentiality via Machine Learning
With Scintillation’s training set, Unified Patents used the positive and negative labels to train a binary classification algorithm to predict potential essentiality to VVC. Textbook machine learning techniques and good practices were adhered to in training this binary classification algorithm.
FastText was used to vectorize the title, abstract, claims, and CPC codes of each patent in the training set. Initially, 400 dimensions were used to distinguish the vectors, but this was reduced to 10 to lower the risk of overfitting. An ensemble of the best-performing models – such as XGBoost and a shallow extra-randomized forest – was aggregated to make the ultimate essentiality prediction.
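A sketch of this vectorize-then-classify pipeline is shown below, assuming the open-source fasttext, scikit-learn, and xgboost libraries; the corpus path, estimator counts, tree depth, and the train_texts/train_labels inputs are illustrative assumptions, not the production configuration.

```python
import fasttext
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier, VotingClassifier
from xgboost import XGBClassifier

# Train a 10-dimensional unsupervised embedding on patent text (title,
# abstract, claims, and CPC codes concatenated, one patent per line).
ft = fasttext.train_unsupervised("patent_corpus.txt", model="skipgram", dim=10)

def vectorize(patent_text: str) -> np.ndarray:
    # fastText averages normalized word vectors into one sentence vector.
    return ft.get_sentence_vector(patent_text.replace("\n", " "))

X = np.vstack([vectorize(t) for t in train_texts])  # train_texts and
y = np.array(train_labels)                          # train_labels are assumed

# Aggregate the best-performing models by averaging predicted probabilities.
ensemble = VotingClassifier(
    estimators=[
        ("xgb", XGBClassifier(n_estimators=300)),
        ("etr", ExtraTreesClassifier(n_estimators=300, max_depth=8)),  # shallow
    ],
    voting="soft",
)
ensemble.fit(X, y)
```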
In training the model, a stratified K-fold cross-validation process was deployed. This stratified resampling corrects the optimistic errors that imbalanced data sets can produce and preserves the class proportions across the cross-validation training and testing sets. The class weights of the positive and negative labels were also balanced.
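A sketch of that validation loop, assuming scikit-learn, appears below; the fold count and the choice of F-1 as the scorer are assumptions consistent with the reported metric.

```python
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Stratification keeps the positive/negative ratio identical in every fold,
# so scores are not inflated by an accidentally easy split.
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# class_weight="balanced" reweights samples inversely to class frequency.
clf = ExtraTreesClassifier(n_estimators=300, max_depth=8, class_weight="balanced")

scores = cross_val_score(clf, X, y, cv=skf, scoring="f1")  # X, y from above
print(f"F-1 per fold: {scores}, mean: {scores.mean():.2f}")
```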
Applying the Trained Model to the Universe
With the model trained, Unified then had to define a relevant universe over which to determine essentiality rates. The universe was constructed to capture the largest set of relevant patents; it consists of all patent publications that:
- Belong to CPC class H04N19;
- Cite to an ITU-T, MPEG, VCEG, JVT, JCT-VC, JVET, or AOM document;
- Cite to a document that mentions a popular video codec or image compression standard such as AVC, H.265, or JPEG;
- Have titles or abstracts containing phrases such as "versatile video coding" or "video compression" or other phrases strongly associated with the VVC standard;
- Are declared to be essential to AVC, HEVC, or VVC on the ITU-T database;
- Are claimed to be essential to AVC, HEVC, or VVC by patent pools such as MPEG-LA and Access Advance;
- Were reviewed by the experts at Scintillation.
These patents were then expanded by family, resulting in a universe of approximately 5 million patent publications.
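As an illustration only, the union of these criteria can be expressed as a boolean predicate over a patent record; the field names below are hypothetical, and the real criteria are applied against bibliographic databases before family expansion.

```python
# Hypothetical patent-record fields; the clauses mirror the bullets above
# (simplified: the two citation criteria are collapsed into one clause).
RELEVANT_BODIES = {"ITU-T", "MPEG", "VCEG", "JVT", "JCT-VC", "JVET", "AOM"}
CODEC_TERMS = {"versatile video coding", "video compression", "avc", "h.265", "jpeg"}

def in_universe(patent: dict) -> bool:
    text = (patent.get("title", "") + " " + patent.get("abstract", "")).lower()
    return (
        any(cpc.startswith("H04N19") for cpc in patent.get("cpc", []))
        or bool(RELEVANT_BODIES & set(patent.get("cited_bodies", [])))
        or any(term in text for term in CODEC_TERMS)
        or patent.get("declared_itu", False)
        or patent.get("pool_claimed", False)
        or patent.get("expert_reviewed", False)
    )
```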
The trained model was applied to this universe, and a score was assigned to each patent publication. In the figure below, gray depicts the distribution of scores across the universe, red the distribution for the negative labels, and blue the distribution for the positive labels.
Essentiality Score Distribution
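Reusing the hypothetical ensemble and vectorize from the training sketch above, the scoring step reduces to taking the ensemble's positive-class probability for each universe publication; universe_texts is an assumed input.

```python
import numpy as np

# Score = predicted probability of the positive (essential) class.
X_universe = np.vstack([vectorize(t) for t in universe_texts])
scores = ensemble.predict_proba(X_universe)[:, 1]
```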
OPAL Performance
The ML algorithm resulting from this training earned a very high F-1 score of 0.97 for VVC. The F-1 score is the harmonic mean of precision and recall, where precision equals the number of true positives divided by the number of all positive results, and recall equals the number of true positives divided by the number of all samples that should have been identified as positive.
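In standard notation, with TP, FP, and FN denoting true positives, false positives, and false negatives:

```latex
\mathrm{precision} = \frac{TP}{TP + FP}, \qquad
\mathrm{recall} = \frac{TP}{TP + FN}, \qquad
F_1 = 2 \cdot \frac{\mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}
```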