NIF-11743

Benchmark owlsim computation time in initial API


Details

    • Project: NIF
    • Label: Issues closed as MONARCH has transitioned from UCSD services

    Description

      In order to get a sense of the time it might take to compute similarity analyses, we need to run a set of benchmarks to assess the current system.

      The benchmarks should measure how long "novel" similarity comparisons will take given a set of phenotypes you provide. I am looking for benchmarks that test how a few different factors affect computation time; specifically, let's test (a harness sketch follows the list):

      1. number of annotated entities in the cache
      2. number of annotations in the query set (what you feed it)
      3. complexity of the ontology used (HPO vs. Uberpheno vs. MP)
      4. amount of memory allocated
      5. previously-seen vs. not-previously-seen annotated classes
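
      For concreteness, here is a minimal sketch of what such a benchmark harness could look like. It is a sketch only: compute_similarity is a hypothetical stand-in for however the local owlsim call ends up being invoked, and the parameter grids are illustrative. Memory (factor 4) would be varied outside the script, e.g. via the JVM heap size (-Xmx) per run.

      import time
      from itertools import product

      # Hypothetical stand-in for the local owlsim similarity call; the real
      # invocation will depend on how owlsim is wired up locally.
      def compute_similarity(query_annotations, ontology, cache_size):
          raise NotImplementedError("replace with the local owlsim call")

      ONTOLOGIES = ["hpo", "uberpheno", "mp"]  # factor 3: ontology complexity
      CACHE_SIZES = [100, 1_000, 10_000]       # factor 1: annotated entities in the cache
      QUERY_SIZES = [1, 5, 10, 25, 50]         # factor 2: annotations in the query set

      def benchmark(query_sets, repeats=5):
          # query_sets maps (ontology, n_annotations, "seen"|"unseen") -> list of
          # class IDs; "seen"/"unseen" exercises factor 5 (classes already in the
          # cache vs. not previously seen).
          results = []
          for ontology, cache_size, n_query, seen in product(
                  ONTOLOGIES, CACHE_SIZES, QUERY_SIZES, ("seen", "unseen")):
              query = query_sets[(ontology, n_query, seen)]
              timings = []
              for _ in range(repeats):
                  start = time.perf_counter()
                  compute_similarity(query, ontology, cache_size)
                  timings.append(time.perf_counter() - start)
              # report the minimum (best case) and the mean across repeats
              results.append((ontology, cache_size, n_query, seen,
                              min(timings), sum(timings) / repeats))
          return results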

      Doing this on your machine without the API will give us the best-possible running times (since there won't be any network latency); once the API is ready, we can add the API calls for comparison, as in the sketch below.
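
      The same comparisons could then be re-timed over HTTP, so that the difference from the local numbers approximates network and serialization overhead. A minimal sketch, assuming a hypothetical endpoint and request shape (the actual API route is not defined in this ticket):

      import time
      import requests

      # Hypothetical URL; substitute the real route once the API exists.
      API_URL = "http://localhost:9000/owlsim/compare"

      def time_api_call(query_annotations, repeats=5):
          # Time one comparison through the HTTP API; compare against the
          # local timings from the harness above to estimate network overhead.
          timings = []
          for _ in range(repeats):
              start = time.perf_counter()
              resp = requests.post(API_URL, json={"attributes": query_annotations})
              resp.raise_for_status()
              timings.append(time.perf_counter() - start)
          return min(timings), sum(timings) / len(timings)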


          People

            Assignee: juest4 Jeremy Espino
            Reporter: nlw Nicole Washington
            Votes: 0
            Watchers: 6

            Dates

              Created:
              Updated:
              Resolved: