This repository has been archived by the owner on Jan 26, 2021. It is now read-only.

lightLDA is killed when training! #72

Open
edisonsfang opened this issue Jul 27, 2018 · 1 comment

Comments


edisonsfang commented Jul 27, 2018

```
There are totally 101636 words in the vocabulary
There are maximally totally 99542125 tokens in the data set
The number of tokens in the output block is: 99542125
Local vocab_size for the output block is: 100642
Elapsed seconds for dump blocks: 8.90156
[INFO] [2018-07-27 22:26:29] INFO: block = 0, the number of slice = 1
[INFO] [2018-07-27 22:26:29] Server 0 starts: num_workers=1 endpoint=inproc://server
[INFO] [2018-07-27 22:26:29] Server 0: Worker registratrion completed: workers=1 trainers=1 servers=1
[INFO] [2018-07-27 22:26:29] Rank 0/1: Multiverso initialized successfully.
[INFO] [2018-07-27 22:26:30] Rank 0/1: Begin of configuration and initialization.
[INFO] [2018-07-27 22:26:50] Rank 0/1: End of configration and initialization.
[INFO] [2018-07-27 22:26:50] Rank 0/1: Begin of training.
[DEBUG] [2018-07-27 22:26:50] Request params. start = 0, end = 101635
[INFO] [2018-07-27 22:26:51] Rank = 0, Iter = 0, Block = 0, Slice = 0
Killed
```

Oh my god, this is not funny! This was my first time running the example, but the process was killed and I don't know what the problem is.

@danyang-liu

I don't think this is a problem with LightLDA itself.
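For reference: a bare `Killed` line with no stack trace on Linux usually means the kernel's out-of-memory killer terminated the process (checkable via `dmesg`). A rough back-of-envelope sketch of the memory this run needs is below; the topic count and 4-byte counters are assumptions, since the log does not show the training configuration:

```python
# Rough memory estimate for the LightLDA run in the log above.
# ASSUMPTION: num_topics = 1000 and int32 (4-byte) entries; the real
# topic count is not in the log, so treat this as a sketch, not a fact.

BYTES_PER_INT32 = 4

def estimate_bytes(vocab_size, num_tokens, num_topics):
    # Dense word-topic table: one counter per (word, topic) pair.
    word_topic_table = vocab_size * num_topics * BYTES_PER_INT32
    # Token store: each token keeps a word id plus a topic assignment.
    token_store = num_tokens * 2 * BYTES_PER_INT32
    return word_topic_table + token_store

# Numbers taken from the log: 101636 vocabulary words, 99542125 tokens.
total = estimate_bytes(vocab_size=101636, num_tokens=99542125, num_topics=1000)
print(f"{total / 2**30:.1f} GiB")  # dense lower bound, before caches/buffers
```

Even this lower bound lands above 1 GiB before Multiverso's buffers and the alias tables are counted, so a small VM or container limit could plausibly trigger the OOM killer here.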
