Partial Sync or re-indexing possible? #472
Comments
I tried to exclude the plugins folder, but for some reason it still takes it into account...
I'm actually trying to figure out a way to do the same. Since my Laravel project is big when it comes to node modules and vendor files, it would be great if there were a way to write the frp-deploy-sync.json, let's say, after every 10 files/folders created on the remote. Or even better, check whether those folders already exist on the server before re-creating them. @SamKirkland
I would like to see this too. Progressive writing to the JSON file would let us pick up where we left off if a long deployment fails partway through. Or maybe on failure, detect it and make sure to write the current progress to the JSON file before exiting. Edit: Duplicate of #341
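To make the checkpointing idea above concrete, here is a minimal sketch, purely as an illustration: the action does not currently expose a hook like this, and the entry shape (type/name/size/hash) and the default state file name are assumptions that should be verified against a .ftp-deploy-sync-state.json produced by a real run.

```typescript
// Illustrative sketch only — FTP-Deploy-Action does not expose such a hook today.
// The SyncStateEntry shape (type/name/size/hash) is an assumption; compare it
// against a .ftp-deploy-sync-state.json produced by an actual successful run.
import { writeFileSync } from "fs";

interface SyncStateEntry {
  type: "file" | "folder";
  name: string;
  size?: number;
  hash?: string;
}

interface SyncState {
  description: string;
  version: string;
  generatedTime: number;
  data: SyncStateEntry[];
}

// Returns a callback that records each uploaded entry and flushes the state
// file every `interval` entries, so a failed deployment could resume from the
// last checkpoint instead of re-uploading everything.
function makeCheckpointer(path: string, interval = 10) {
  const state: SyncState = {
    description: "Sync state checkpoint (sketch)",
    version: "1.0.0",
    generatedTime: Date.now(),
    data: [],
  };
  return (entry: SyncStateEntry): void => {
    state.data.push(entry);
    if (state.data.length % interval === 0) {
      state.generatedTime = Date.now();
      writeFileSync(path, JSON.stringify(state, null, 2));
    }
  };
}

// Hypothetical usage: call the checkpointer after each successful upload.
const checkpoint = makeCheckpointer(".ftp-deploy-sync-state.json", 10);
checkpoint({ type: "file", name: "index.php", size: 1234, hash: "…" });
```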
Hey guys! I had the same problem and I have a workaround that might help someone. I'm sharing a script, a txt and a readme that prevent uploading the entire project when it's already uploaded but not synced. I had 2 projects and it worked in both, so I hope it works fine for you as well and saves you some time and resources. Hope it helps!
Can you elaborate on it?
Yes, it's a script which generates a sync state json file to be uploaded to the server, so the ftp deploy action knows when the files are already there. It comes with a readme that shows detailed steps to use it (including running the script, the CD actions and uploading the json file). Please let me know if that readme is not clear. Unfortunately I didn't have enough time to create a repo or make a PR in this one, but that would be the best, I know.
If I understood correctly, the script itself needs to be uploaded to the target system, right?
No, not the script, just the resulting json should be uploaded. That's what the FTP action uses to compare the repo's files with the files on the target system.
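For anyone who wants to build a similar generator themselves, the sketch below walks the local project, hashes each file, and writes a state JSON. It assumes the action's default state file name (.ftp-deploy-sync-state.json), the type/name/size/hash entry shape, and SHA-256 content hashes — all of these should be double-checked against a state file the action itself produced on a small test run, since any mismatch would make the comparison useless.

```typescript
// Sketch of a local sync-state generator (assumptions: default state file name,
// type/name/size/hash entry shape, SHA-256 hashes — verify against a state file
// generated by the action itself before relying on this).
import { createHash } from "crypto";
import { readdirSync, readFileSync, statSync, writeFileSync } from "fs";
import { join, relative } from "path";

interface Entry {
  type: "file" | "folder";
  name: string;
  size?: number;
  hash?: string;
}

// Recursively collect every folder and file under `dir`, relative to `root`,
// normalizing path separators to forward slashes.
function walk(root: string, dir: string, entries: Entry[]): void {
  for (const item of readdirSync(dir)) {
    const full = join(dir, item);
    const name = relative(root, full).split("\\").join("/");
    if (statSync(full).isDirectory()) {
      entries.push({ type: "folder", name: name + "/" });
      walk(root, full, entries);
    } else {
      const buf = readFileSync(full);
      entries.push({
        type: "file",
        name,
        size: buf.length,
        hash: createHash("sha256").update(buf).digest("hex"),
      });
    }
  }
}

const root = process.argv[2] ?? ".";
const entries: Entry[] = [];
walk(root, root, entries);

writeFileSync(
  ".ftp-deploy-sync-state.json",
  JSON.stringify(
    {
      description: "Locally generated sync state (workaround sketch)",
      version: "1.0.0",
      generatedTime: Date.now(),
      data: entries,
    },
    null,
    2
  )
);
```

The generated file would then be uploaded once next to the already-deployed files, so the next action run compares against it instead of treating the server as empty; anything excluded from deployment (node_modules, vendor, etc.) should also be skipped when generating it.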
Bug Description
I have a large WordPress Site with a Woocommerce Shop etc.
Recently it looked like the file-state JSON was no longer correct, since deployments kept failing (this happens from time to time).
As usual I simply deleted the file, so it could be reindexed.
However, the workflow now always runs into the time limit and fails, since there seem to be too many files.
Is it possible to re-index in another way?
Or is it possible to exclude folders and re-include them one after another, so the index is complete again?
I'm also not sure whether excluded folders get wiped from the index.
🎉 Deploy: The job running on runner GitHub Actions 20 has exceeded the maximum execution time of 360 minutes.
🎉 Deploy: The operation was canceled.