
Python SDK: New Function to Generate Parsing Evaluation Reports

We are pleased to announce a highly anticipated update to our Python SDK, enabling our users to generate reports that evaluate the quality of our parsing models.

😍 Why is it a big deal for HrFlow.ai users?

With the new capability to generate parsing evaluation reports, HrFlow.ai users can:

  1. Assess the Accuracy of HrFlow.ai parsing models (Quicksilver, Hawk, Mozart) to select the most suitable one for their specific use case.
  2. Identify Improvements with each monthly product release to continuously enhance performance.
  3. Compare Models by evaluating our state-of-the-art models against competitor performance, ensuring the best choice for their needs.

🔧 How does it work?

To generate a Profile Parsing evaluation report, you can:

  1. Install the HrFlow.ai Python SDK using the command pip install -U hrflow or conda install hrflow -c conda-forge.
  2. Log in to the HrFlow.ai Portal at hrflow.ai/signin.
  3. Obtain your API key from developers.hrflow.ai/docs/api-authentication.
  4. Ensure the Profile Parsing API is enabled in your account.
  5. Create a Source as described at developers.hrflow.ai/docs/connectors-source.
  6. Upload your profiles to the source you created.
  7. Call the generate_parsing_evaluation_report() function from our Python SDK with the required arguments to generate an Excel report (see the sketch below).
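
For illustration, here is a minimal sketch of steps 6 and 7 using the hrflow Python client. The Hrflow client and the generate_parsing_evaluation_report() function are part of the SDK, but the upload call shown, the parameter names (source_key, report_path), and where the report function is exposed on the client are assumptions made for this example; please check the SDK reference for the exact signatures.

```python
# Minimal sketch: upload a profile to a Source, then generate the parsing
# evaluation report. Parameter names and the location of
# generate_parsing_evaluation_report() are assumptions for illustration only.
from hrflow import Hrflow

# Authenticate with the API key obtained from the HrFlow.ai Portal (step 3).
client = Hrflow(api_secret="your_api_secret_key", api_user="you@example.com")

SOURCE_KEY = "your_source_key"  # key of the Source created in step 5

# Step 6 (assumed call): upload a resume file to the Source for parsing.
with open("resumes/jane_doe.pdf", "rb") as resume:
    client.profile.parsing.add_file(
        source_key=SOURCE_KEY,
        profile_file=resume,
        sync_parsing=1,  # parse synchronously so results are available immediately
    )

# Step 7 (hypothetical placement and arguments): generate the Excel report.
client.profile.parsing.generate_parsing_evaluation_report(
    source_key=SOURCE_KEY,
    report_path="parsing_evaluation_report.xlsx",
)
```

The resulting Excel report summarizes parsing quality for the profiles in your Source, which you can then compare across the Quicksilver, Hawk, and Mozart models.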

💡 Useful Links