
I've created a Custom Vision project to recognise characters (A, B, C...). What is interesting: if I upload an image of a character (in this case an "N") to the vision API portal, it tells me it is 99.9% sure it is an "N":

(screenshot: portal prediction, 99.9% "N")

If, however, I use the client libraries to predict the very same image, I get 53% that it is a "W" and only 37% that it is an "N":

(screenshot: client prediction, 53% "W", 37% "N")

The code to get the prediction on my client:

var client = new CustomVisionPredictionClient()
{
    ApiKey = predictionKey,
    Endpoint = endpoint
};

var result = await client.PredictImageAsync(Guid.Parse(projectId), imageStream).ConfigureAwait(false);
var prediction = result.Predictions.FirstOrDefault();

Where does this difference come from, and how can I fix it? According to the tests I did by uploading images to the portal, the results are close to 100% correct no matter which character image I upload.

UPDATE: I noticed that there was an update for the client libraries; they went from 0.12 preview to 1.0 stable. After the update, PredictImageAsync is gone and has been replaced with DetectImageAsync, which expects an additional parameter, a model name. I tried using the name of the iteration, and after a while the method returns with an internal server error, so I'm not sure what to try next.
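
For reference, this is roughly the call I tried after the update (a sketch only; passing the iteration name "Iteration12" as the model-name parameter is my own guess at what the method expects):

// Attempted call with the 1.0 client: DetectImageAsync expects a model name
// in addition to the project id and the image stream. Passing the iteration
// name here is a guess; this is the call that ends in an internal server error.
var client = new CustomVisionPredictionClient()
{
    ApiKey = predictionKey,
    Endpoint = endpoint
};

var result = await client.DetectImageAsync(Guid.Parse(projectId), "Iteration12", imageStream).ConfigureAwait(false);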


1 Answer


The comments above pointed me in the right direction, thank you!

The new client library has two methods, ClassifyImage and DetectImage (plus several variations of both), which replace the previously used methods, including the PredictImage method I was using with the preview version of the client library.

ClassifyImage is of course the one to use to classify an image (which is what I want to do). The new code looks like this and gives predictions that are almost 100% correct:

var client = new CustomVisionPredictionClient()
{
    ApiKey = predictionKey,
    Endpoint = endpoint
};

var result = await client.ClassifyImageAsync(Guid.Parse(projectId), "Iteration12", imageStream).ConfigureAwait(false);
var prediction = result.Predictions.FirstOrDefault();
  • endpoint is, in my case, the URL of the region where the vision API lives: https://westeurope.api.cognitive.microsoft.com
  • predictionKey is available on the CustomVision.AI site of your project, as is projectId
  • the publishedName parameter is the name of the iteration to use (in my case "Iteration12")
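
To compare the client result with what the portal shows, it can also help to dump all returned predictions with their probabilities instead of only taking the first one. A minimal sketch (assumes System.Linq; the Predictions collection exposes TagName and Probability):

// List every tag with its probability instead of only the top hit,
// ordered from most to least likely.
foreach (var p in result.Predictions.OrderByDescending(p => p.Probability))
{
    Console.WriteLine($"{p.TagName}: {p.Probability:P1}");
}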
answered 2019-03-28T14:38:05.423