“I think it’s a bad idea to evaluate researchers by the number of papers or patents they have.” The source is Hiroshi Maruyama, former director of IBM Research Laboratory.

[Evaluation] [Numerical Criteria] [Quantitative KPIs]

tokoroten

  • I’m looking for the source of a story in which the director of a research institute, IBM or somewhere I think, said something along the lines of “If you make researchers’ performance evaluation criteria public, they will act according to them and fall into a local optimum.” I’ve been trying to find the source, but I can’t. Hmm, I wonder who said that.

  • “To all aspiring corporate researchers” has arrived, so I’d like to summarize it briefly.

    • Not only those in companies, but corporate researchers in particular, should be [Impacting the world] (p. 3)
    • The letter “About Evaluation” is introduced on p. 141
      • Indicators based on the number of papers or citations do not apply well to corporate research institutions
      • Because the value of our research results is first and foremost expressed in our advanced products and services
      • Evaluation should be by the [addition method]
        • Otherwise, goals get set low at the beginning of the year, and motivation to work on anything beyond those goals drops.
        • About the [Transparency of evaluation]
        • Transparency means making clear the reasons for the evaluation.
        • It’s not about creating a standard in advance that says, “If you get this far, you’ll get an A.”
          • Such a criterion becomes a subtraction method: “If you don’t get that far, you’re not an A.”
        • Measuring by the number of patents and papers is not appropriate because there is a huge gap in the quality of individual patents and papers.
        • If you want to say, “I have made an impact on the world and I want to be recognized for it,” then speak up and claim it yourself: this is the [appeal responsibility] (the responsibility to make your own case).
        • nishio.icon So instead of creating metrics in advance, you evaluate after the fact, once things have actually happened.

Future Discussions

  • From the evaluator’s point of view, “Then how should I evaluate?”
    • This is a challenge not only for researchers, but for all types of work whose results cannot be measured quantitatively.
    • Drucker’s knowledge worker concept = type of work for which results cannot be quantitatively measured.
  • From the perspective of the person being evaluated.
    • “When there is no numerical standard for evaluation, what can we claim as an achievement?”
      • → Claim that you have “made an impact on the world”
    • “What should you do when numerical criteria for evaluation have been established?”
      • There are organizations where numerical standards have already been introduced, and people who work in them.
      • Even if setting a numerical standard seems foolish, doing so is within each organization’s freedom.
      • On the other hand, how much effort the workers in that organization devote to improving their evaluation under such strange criteria is, in turn, a matter of the workers’ own freedom.
      • → Just ignore the evaluation criteria and act as you see fit.

This page is auto-translated from /nishio/研究者の評価に数値基準を設けてはいけない using DeepL. If you find something interesting but the auto-translated English is not good enough to understand it, feel free to let me know at @nishio_en. I’m very happy to spread my thoughts to non-Japanese readers.