I trained an LSTM-RNN on the main-text portion of The Intellectual Production Techniques of Engineers:

%run train_rnnlm.py
#vocab = 7069
epoch       iteration   perplexity  val_perplexity
3           500         207.066     168.164         
7           1000        55.6905     88.8363         
11          1500        32.3283     83.8262         
15          2000        19.5342     96.141          
19          2500        12.4958     115.902         
23          3000        9.15305     138.699         
26          3500        6.81103     159.106         
30          4000        5.71608     178.38          
34          4500        5.07816     201.215         
38          5000        4.33354     223.951         
test
test perplexity: 333.32472655247705
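
For reference, here is a minimal sketch of the kind of truncated-BPTT training loop a script like train_rnnlm.py presumably runs; the printed value is analogous to the perplexity column above. This is plain PyTorch with made-up hyperparameters and a random stand-in corpus, not the actual script's code.

# Sketch of an LSTM language-model training loop (illustrative only).
import math
import torch
import torch.nn as nn

vocab_size, embed_dim, hidden_dim = 7069, 100, 100   # vocab matches the run above
bptt_len, batch_size = 35, 20                        # illustrative values

class RnnLm(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)
    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.head(h), state

model = RnnLm()
optimizer = torch.optim.SGD(model.parameters(), lr=1.0)
criterion = nn.CrossEntropyLoss()

# Random stand-in corpus; in the real experiment this is the book's main text as token ids.
corpus = torch.randint(0, vocab_size, (batch_size, 10 * bptt_len + 1))

state = None
for step in range(0, corpus.size(1) - bptt_len, bptt_len):
    x = corpus[:, step:step + bptt_len]            # input tokens
    y = corpus[:, step + 1:step + 1 + bptt_len]    # next-token targets
    logits, state = model(x, state)
    state = tuple(s.detach() for s in state)       # truncate backprop through time
    loss = criterion(logits.reshape(-1, vocab_size), y.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    nn.utils.clip_grad_norm_(model.parameters(), 0.25)
    optimizer.step()
    print(f"step {step}  train perplexity {math.exp(loss.item()):.3f}")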

Judging from the validation perplexity, maybe I should have stopped at around epoch 15.

  • → I tried generating sentences with the model at around 2000 iterations, but subjectively the final model seems better.
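
Since the validation perplexity bottoms out around iteration 1500 and then climbs steadily, a simple guard like the following would make that stopping decision automatic. This is a hypothetical sketch; the two callables are placeholders for whatever train_rnnlm.py actually does each epoch, not functions the script provides.

# Hypothetical early stopping keyed on validation perplexity.
def train_with_early_stopping(train_one_epoch, eval_val_perplexity,
                              max_epochs=40, patience=2):
    best_val, best_epoch, bad_evals = float("inf"), 0, 0
    for epoch in range(1, max_epochs + 1):
        train_one_epoch()                   # placeholder: one pass over the training data
        val_ppl = eval_val_perplexity()     # placeholder: perplexity on held-out text
        if val_ppl < best_val:
            best_val, best_epoch, bad_evals = val_ppl, epoch, 0
            # in practice, save a checkpoint of the model here
        else:
            bad_evals += 1
            if bad_evals >= patience:
                break
    print(f"best epoch {best_epoch}, best val perplexity {best_val:.1f}")
    return best_epoch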

Sentence-generation experiments using the LSTM language model:

  • Intellectual big direction / Realistic :Compare pictures of tuples and repeat cycle again.

  • 7 Engineer's Wall of 7 Ideas That Haven't Came Out of Consciousness ⁹Ideas come up with new ideas.

  • Remember the abstraction about growing up, and understanding is 8 hours frequent "This has to do with

  • Understanding. It is UC V for philosophical language. Not one side of the table of contents, but the table of contents is back

  • Repeat reading. Think first before you stand up. For example tree Mihaly (Edison), climbing

  • because I am looking at access to James Webb Young. I agree, and "U Theory."

  • Edison, during maintenance and engineering)⁸ To instruct as often as you edit

  • Wife guru research expeditions use state actively. The break-even point is past this design

The local word connections are pretty smooth, but, well, the sentence as a whole is meaningless.

The output above gives the model a single word and then generates 20 words, but I think the sentences often have a better feel if the model is left running for a while:

  • The wife's mastery research expedition uses the state positively. So the break-even point stops with the intention of "I can't proceed with the game I thought I needed such a table of contents" when the answer is "it seems relevant" out of the products past this design. For example, this somewhat relationship is the origin of the word 15 difficult. On the other hand, it is a way of ceasing to read while moving one's hand, not something that goes actively learning to a physical teacher. It is important not to be able to distinguish between the two hares when the thought comes to mind, "He who pursues two hares will not get one hare." The task that had been first of all to get the topic to connect can be downloaded from the URL somewhat for your experience to record and hit well.

  • Intelligent big direction/Realistic :Again by comparing pictures of the tuple and repeating the cycle, the understanding is verified. The experimental verification may have been some-like associating hypotheses. Experiencing ⁶ being selected by myself, I need to clarify the purpose. I have a quest and a realization. The world is about whale sharks, fun, letting go of the concept of otherness and creating memories efficiently. Now the task of making questions is full and confusing, with the confusion must make new knowledge. For example, "coming up with an idea" is vague and a big task bud, you also realize the worry that "it would be wrong to be able to do it in your own words".

This is probably because, during training, many words are fed in continuously, so the internal state produced by a single word of input is far from the distribution of internal states seen during training. For sentence generation, it would be better to feed multiple words as input. Letting the model run for a while and then finding and cutting out appropriate start and end points in the word sequence also seems workable; that is the same mechanism as quoting directly from the original text.
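
As a rough sketch of that idea (assuming a model with the same (input, state) -> (logits, state) interface as the training sketch above): feed a multi-word seed to warm up the hidden state, sample a long continuation, and keep only a span cut at sentence boundaries. EOS_ID and the example seed ids are assumptions, not values from the actual experiment.

# Sketch: warm up the LSTM state on a multi-word seed, sample a long run,
# then cut out a span between sentence-boundary tokens. All names are assumed.
import torch

EOS_ID = 0   # assumed id of the sentence-boundary token (e.g. "。") in the vocabulary

def sample_span(model, seed_ids, run_len=300, temperature=1.0):
    with torch.no_grad():
        # Warm up the hidden state on the whole seed rather than a single word.
        logits, state = model(torch.tensor([seed_ids]), None)
        generated = []
        for _ in range(run_len):
            probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
            token = torch.multinomial(probs, 1).view(1, 1)
            generated.append(token.item())
            logits, state = model(token, state)
    # Keep only the part between the first and last sentence boundary found.
    cuts = [i for i, t in enumerate(generated) if t == EOS_ID]
    if len(cuts) >= 2:
        return generated[cuts[0] + 1:cuts[-1] + 1]
    return generated

# e.g. span = sample_span(model, seed_ids=[12, 345, 67])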

The model keeps a time-related topic going for a fairly long stretch. As expected of an LSTM, it retains context over a long span:

  • The "no big time" means that you know about the size of multiple tasks, read through bad replacements without realizing the optimal drop, and do not have a sense of time from the information based on the writing in GTDSensing : The state of not sensing the time based on the experience of others. I tried to work with the algorithm of choosing the option with the highest expected value based on the experience of others. Conversely, looking back is also the same part like the word "abstract" to its "grasp the whole picture" when you think that you have "found" in line following that the person who is framed in the gap between your afterimage and Cowan is used. It will be quite an observation to make what kind of knowledge you intend to assume based on this training every day. Is it easy in the pyramid of reading speed? For example, "grasping the big picture".

This must have picked up a chapter heading or something: "Chapter 1: Three Ways to Gather Information to Learn Something New". Similarly, some of the footnote markers have been picked up, which looks odd.

  • I prefer to do 50mm x 38mm fusen Chapter 1: The driving force that turns the cycle to learn new things:Motivation 8 Chapter 1: The three ways of information gathering to learn new things, how to allocate the three ways of information gathering, as you want to continue where you turn the conversation, you may learn the 1964. This is the birth of the phrase "30 seconds per book", which means 30 seconds for just one page, and the two hours will be over.

Part of the original data:

… But it is not. First of all, the top priority is excellence, and that Chapter 7: Deciding What to Learn Is a Self-Management Strategy 234 Chapter 7: Deciding What to Learn Is a Self-Management Strategy 235 and thus an opportunity for growth Note 15 …

Ah, I see, so that's how it is. This needs to be removed. (Page / Row-Based Language Models)
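
Since running headers, page numbers, and footnote markers clearly leaked into the training data, a preprocessing pass roughly like the following could strip them before training. The patterns below are guesses written against the translated excerpt above; the real rules would have to target the Japanese source text.

# Sketch: strip page-layout residue from the extracted book text before training.
import re

HEADER = re.compile(r"Chapter \d+:.*?\d{1,3}")    # running header followed by a page number
FOOTNOTE = re.compile(r"Note\s*\d+")              # inline footnote references
SUPERSCRIPT = re.compile(r"[⁰¹²³⁴⁵⁶⁷⁸⁹]+")        # superscript footnote digits

def clean(text: str) -> str:
    for pattern in (HEADER, FOOTNOTE, SUPERSCRIPT):
        text = pattern.sub(" ", text)
    return re.sub(r"\s+", " ", text).strip()

sample = ("and that Chapter 7: Deciding What to Learn Is a Self-Management Strategy 234 "
          "an opportunity for growth Note 15 ...")
print(clean(sample))   # -> "and that an opportunity for growth ..."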


This page is auto-translated from /nishio/言語モデル作成実験2018-09-28 using DeepL. If you find something interesting but the auto-translated English is not good enough to understand it, feel free to let me know at @nishio_en. I'm very happy to spread my thoughts to non-Japanese readers.