I am having this issue when trying to run Cluster_Ensembles on a CentOS machine. I have already installed METIS and it appears to be running.
Do you know what could be causing the error?
INFO: Cluster_Ensembles: cluster_ensembles: due to a rather large number of cells in your data-set, using only 'HyperGraph Partitioning Algorithm' (HGPA) and 'Meta-CLustering Algorithm' (MCLA) as ensemble consensus functions.
*****
INFO: Cluster_Ensembles: HGPA: consensus clustering using HGPA.
#
INFO: Cluster_Ensembles: wgraph: writing wgraph_HGPA.
INFO: Cluster_Ensembles: wgraph: 239847 vertices and 119 non-zero hyper-edges.
#
#
INFO: Cluster_Ensembles: sgraph: calling shmetis for hypergraph partitioning.
Out of netind memory!
Traceback (most recent call last):
File "cluster_ensemble.py", line 42, in <module>
clusterlist = cooperative_cluster(data, feature_method)
File "cluster_ensemble.py", line 22, in cooperative_cluster
consensus_labels = CE.cluster_ensembles(cluster_runs, verbose = True, N_clusters_max = 16)
File "/home/DeepLearning/Pyenv/ontoenv/lib/python3.6/site-packages/Cluster_Ensembles/Cluster_Ensembles.py", line 309, in cluster_ensembles
cluster_ensemble.append(consensus_functions[i](hdf5_file_name, cluster_runs, verbose, N_clusters_max))
File "/home/DeepLearning/Pyenv/ontoenv/lib/python3.6/site-packages/Cluster_Ensembles/Cluster_Ensembles.py", line 657, in HGPA
return hmetis(hdf5_file_name, N_clusters_max)
File "/home/DeepLearning/Pyenv/ontoenv/lib/python3.6/site-packages/Cluster_Ensembles/Cluster_Ensembles.py", line 982, in hmetis
labels = sgraph(N_clusters_max, file_name)
File "/home/DeepLearning/Pyenv/ontoenv/lib/python3.6/site-packages/Cluster_Ensembles/Cluster_Ensembles.py", line 1210, in sgraph
with open(out_name, 'r') as file:
FileNotFoundError: [Errno 2] No such file or directory: 'wgraph_HGPA.part.16'
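For context, this is roughly how the call is made. This is a minimal sketch: the `cluster_runs` array below is made up for illustration, and only the `CE.cluster_ensembles(...)` call with `verbose=True` and `N_clusters_max=16` is taken from the traceback above.

```python
# Minimal sketch of the failing call, assuming cluster_runs is a 2D integer
# array of shape (n_runs, n_samples) holding the labels from each base
# clustering run. The data here is synthetic; only the cluster_ensembles()
# call itself mirrors the traceback.
import numpy as np
import Cluster_Ensembles as CE

n_runs, n_samples = 5, 239847   # ~239847 vertices reported by wgraph above
rng = np.random.default_rng(0)
cluster_runs = rng.integers(0, 16, size=(n_runs, n_samples))

consensus_labels = CE.cluster_ensembles(cluster_runs, verbose=True,
                                         N_clusters_max=16)
```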
INFO: Cluster_Ensembles: cluster_ensembles: due to a rather large number of cells in your data-set, using only 'HyperGraph Partitioning Algorithm' (HGPA) and 'Meta-CLustering Algorithm' (MCLA) as ensemble consensus functions.
INFO: Cluster_Ensembles: HGPA: consensus clustering using HGPA.
INFO: Cluster_Ensembles: sgraph: calling shmetis for hypergraph partitioning.
Out of netind memory!
Traceback (most recent call last):
File "consensus_clustering.py", line 105, in
roi_labels=ensemble_clustering(working_dir,subjects_filepath,metric,id_roi,k,atlas_name)
File "consensus_clustering.py", line 88, in ensemble_clustering
ensemble_labels = CE.cluster_ensembles(cluster_mat,verbose=True,N_clusters_max=nr_cl)
File "/home/neuroimaging/.local/lib/python3.8/site-packages/Cluster_Ensembles/Cluster_Ensembles.py", line 309, in cluster_ensembles
cluster_ensemble.append(consensus_functions[i](hdf5_file_name, cluster_runs, verbose, N_clusters_max))
File "/home/neuroimaging/.local/lib/python3.8/site-packages/Cluster_Ensembles/Cluster_Ensembles.py", line 657, in HGPA
return hmetis(hdf5_file_name, N_clusters_max)
File "/home/neuroimaging/.local/lib/python3.8/site-packages/Cluster_Ensembles/Cluster_Ensembles.py", line 982, in hmetis
labels = sgraph(N_clusters_max, file_name)
File "/home/neuroimaging/.local/lib/python3.8/site-packages/Cluster_Ensembles/Cluster_Ensembles.py", line 1210, in sgraph
with open(out_name, 'r') as file:
FileNotFoundError: [Errno 2] No such file or directory: 'wgraph_HGPA.part.2'
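Since the FileNotFoundError for 'wgraph_HGPA.part.2' appears right after shmetis prints "Out of netind memory!", the partition file is presumably never written. A quick check like the sketch below (standard library only; the assumption that shmetis is on PATH and that the output lands in the current working directory is mine, not from the library's documentation) can help confirm whether the shmetis binary is reachable and whether the .part file was produced:

```python
# Hedged diagnostic sketch: check whether the shmetis binary named in the log
# is on PATH, and whether the partition file from the error message exists
# in the current working directory. Names are taken from the log above.
import os
import shutil

shmetis_path = shutil.which("shmetis")
print("shmetis found at:", shmetis_path)

expected = "wgraph_HGPA.part.2"   # file name from the FileNotFoundError above
print(expected, "exists:", os.path.exists(expected))
```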