“I think it’s a bad idea to evaluate researchers by the number of papers or patents they have.” The source is Hiroshi Maruyama, former Director of IBM Tokyo Research Laboratory.
[Evaluation] [Numerical Criteria] [Quantitative KPIs]
-
I’m looking for the source of a story in which the director of a research institute (I think it was IBM or somewhere) said something along the lines of “If you make the performance evaluation criteria for researchers public, they will act accordingly and fall into a local optimum.” I’m trying to track down where the story comes from, but I can’t find it. Hmm, I wonder who said that.
- ceekz
-
“To All Aspiring Corporate Researchers: Research That Matters” (book) - 2009/11/4 - Hiroshi Maruyama
-
I think it was this book
-
- Hiroshi Maruyama was the Director of IBM Tokyo Research Laboratory from 2006 to 2009, and has been the Chief Strategy Officer of Preferred Networks since April 2016.
- ReiOdaira
-
I have heard that story from Mr. Maruyama himself, but to be precise, it was not about making the evaluation criteria public; what he said was, “If we set strict numerical criteria for evaluation, the researchers are all so smart that they will optimize their actions for those criteria.”
-
At the institute’s general meeting, he said, “So we do not set numerical criteria for evaluation.” I don’t know whether they really didn’t set them or just didn’t disclose them.
-
- dmikurube
-
I heard this too, at an all-hands meeting at the institute when I was a student, and it went something like this: “If we could really create appropriate criteria we might do so, but in practice that is impossible, and the world and the environment change so quickly that keeping up with them and revising the criteria every time would be too much. So rather than setting explicit criteria, I want everyone to think for themselves about what they should do to make a real impact on the world, and then do that work or create it themselves.”
-
- ceekz
-
“To All Aspiring Corporate Researchers” has arrived, so I’d like to summarize it briefly.
- Not only companies, but corporate researchers in particular, should be [Impacting the world]
- He calls research that has an impact [Research That Matters]
- p.3
- Pasteur’s Quadrant: Bohr, Edison, Pasteur
- A letter titled “About Evaluation” is introduced on p. 141
- Indicators based on the number of papers or citations do not apply well to corporate research institutions
- Because the value of our research results is first and foremost expressed in our advanced products and services
- Evaluation should use the [addition method]
- Otherwise, goals are set low at the beginning of the year and motivation to work outside of those goals is lowered.
- Evaluating by goal attainment rate will lead to ignoring opportunities.
- If an opportunity comes along that you didn’t expect when you set your goal, you should seize it.
- But if you devote time and resources to things that are not planned, the probability of accomplishing what you planned to do goes down.
- About the [Transparency of evaluation]
- Transparency means making clear the reasons for the evaluation.
- It’s not about creating a standard in advance that says, “If you get this far, you’ll get an A.”
- Such a criterion turns into a subtraction method: “If you don’t get that far, you’re not an A.”
- Measuring by the number of patents or papers is not appropriate, because the quality of individual patents and papers varies enormously.
- If you feel, “I have made an impact on the world and I want it to be recognized,” you should speak up and make that claim yourself. - there is a responsibility to make that appeal: [appeal responsibility]
- So instead of creating metrics in advance, evaluation is done after the fact, once things have happened.
- Otherwise, goals are set low at the beginning of the year and motivation to work outside of those goals is lowered.
- Not only companies, but corporate researchers in particular, should be [Impacting the world]
Future Discussions
- From the evaluator’s point of view, “Then how should I evaluate?”
- Challenges not only for researchers, but for all types of work where results cannot be quantitatively measured.
- Drucker’s knowledge worker concept = type of work for which results cannot be quantitatively measured.
- From the perspective of the person being evaluated.
- “When there is no numerical standard for evaluation, what can we claim as an achievement?”
- → Claim that you have “made an impact on the world”
- “What should I do when numerical criteria for evaluation have been established?”
- There are organizations where numerical criteria have already been introduced, and there are people who work in them.
- Even if setting numerical criteria seems foolish, doing so is within each organization’s freedom.
- On the other hand, how much effort the workers in that organization devote to improving their evaluation under those strange criteria is a matter of the workers’ own freedom.
- → Just ignore the evaluation criteria and act as you see fit.
This page is auto-translated from /nishio/研究者の評価に数値基準を設けてはいけない using DeepL. If you find something interesting but the auto-translated English is not good enough to understand it, feel free to let me know at @nishio_en. I’m very happy to spread my thoughts to non-Japanese readers.