-
Hi SuFong,
Also, we have just tested the qlstm code, and it should run normally now. Please give it a try!
-
The performance difference between the old version and the new version needs further checking @Hanrui-Wang
-
When I tried the example shown in POS_tagging.ipynb, I ran into the following problem in the quantum-part simulation, on both Ubuntu and Windows, while the classical LSTM ran with no problem:
Traceback (most recent call last):
File "torchtest.py", line 358, in
history_quantum = train(model_quantum, n_epochs)
File "torchtest.py", line 307, in train
tag_scores = model(sentence_in)
File "/root/anaconda3/envs/torch_env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "torchtest.py", line 268, in forward
lstm_out, _ = self.lstm(embeds.view(len(sentence), 1, -1))
File "/root/anaconda3/envs/torch_env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "torchtest.py", line 209, in forward
f_t = torch.sigmoid(self.clayer_out(self.VQC['forget'](y_t)))  # forget block
File "/root/anaconda3/envs/torch_env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "torchtest.py", line 44, in forward
self.encoder(self.q_device, x)
File "/root/anaconda3/envs/torch_env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/mnt/c/torchquantum/torchquantum/graph.py", line 25, in forward_register_graph
res = f(*args, **kwargs)
File "/mnt/c/torchquantum/torchquantum/encoding.py", line 66, in forward
func_name_dict[info["func"]](
File "/mnt/c/torchquantum/torchquantum/functional.py", line 1799, in rx
gate_wrapper(
File "/mnt/c/torchquantum/torchquantum/functional.py", line 326, in gate_wrapper
q_device.states = apply_unitary_bmm(state, matrix, wires)
File "/mnt/c/torchquantum/torchquantum/functional.py", line 202, in apply_unitary_bmm
new_state = mat.bmm(permuted)
RuntimeError: Expected size for first two dimensions of batch2 tensor to be: [5, 2] but got: [1, 2].
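For context, this error comes from torch.bmm, which requires both batched operands to share the same batch size: batch1 of shape (b, n, m) and batch2 of shape (b, m, p). A minimal standalone reproduction of the mismatch, with shapes chosen only to mirror the message above rather than taken from the notebook:
import torch
# torch.bmm(batch1, batch2) needs batch1: (b, n, m) and batch2: (b, m, p).
# Here the gate matrices carry batch size 5 while the state carries batch
# size 1, reproducing the [5, 2] vs. [1, 2] complaint in the traceback.
mat = torch.randn(5, 2, 2)    # e.g. five per-sample 2x2 gate matrices
state = torch.randn(1, 2, 2)  # a permuted state tensor with batch size 1
mat.bmm(state)  # RuntimeError: Expected size ... to be: [5, 2] but got: [1, 2]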
Why are the QLayer input, QLayer update, and QLayer output wired differently from the QLayer forget, whose encoder is
self.encoder = tq.GeneralEncoder(
[ {'input_idx': [0], 'func': 'rx', 'wires': [0]},
{'input_idx': [1], 'func': 'rx', 'wires': [1]},
{'input_idx': [2], 'func': 'rx', 'wires': [2]},
{'input_idx': [3], 'func': 'rx', 'wires': [3]},
])
i.e. 'input_idx': [0] -> wires [0], [1] -> [1], [2] -> [2], [3] -> [3]? (For the other gates it is 'input_idx': [0] -> wires [0], [1] -> [1], [2] -> [2], [3] -> [2].)
This example has 4 inputs, so we need 4 qubits. If my input is 6 sets of sequential data, then do I have to modify the code to:
self.encoder = tq.GeneralEncoder(
[ {'input_idx': [0], 'func': 'rx', 'wires': [0]},
{'input_idx': [1], 'func': 'rx', 'wires': [1]},
{'input_idx': [2], 'func': 'rx', 'wires': [2]},
{'input_idx': [3], 'func': 'rx', 'wires': [...]},
{'input_idx': [4], 'func': 'rx', 'wires': [...]},
{'input_idx': [5], 'func': 'rx', 'wires': [...]},
])
Am I right? If so, does this apply to all the gates mentioned above?
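For illustration, if the [3] -> [2] mapping in the other gates is indeed a typo and every input is meant to drive its own wire, the one-to-one 6-qubit extension I have in mind would be the following sketch (the wire assignments are my assumption, not a confirmed answer, and API details may differ across torchquantum releases):
import torch
import torchquantum as tq
# Assumed one-to-one mapping: input feature i -> rx rotation on wire i.
encoder = tq.GeneralEncoder([
    {'input_idx': [0], 'func': 'rx', 'wires': [0]},
    {'input_idx': [1], 'func': 'rx', 'wires': [1]},
    {'input_idx': [2], 'func': 'rx', 'wires': [2]},
    {'input_idx': [3], 'func': 'rx', 'wires': [3]},
    {'input_idx': [4], 'func': 'rx', 'wires': [4]},
    {'input_idx': [5], 'func': 'rx', 'wires': [5]},
])
q_device = tq.QuantumDevice(n_wires=6)
x = torch.rand(1, 6)  # one sample with 6 features
encoder(q_device, x)  # encodes feature i as an rx(x[:, i]) rotation on wire i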
If the original code has no typos, what wire indices should I put in
{'input_idx': [3], 'func': 'rx', 'wires': [???]},
{'input_idx': [4], 'func': 'rx', 'wires': [???]},
{'input_idx': [5], 'func': 'rx', 'wires': [???]},
I would appreciate it if you could explain the questions above. Thank you very much.