Since full escaped backup paths can become longer than the maximum filename
size, and hashed filenames cannot be mapped back to the original path, it is
helpful to have a lookup file that lets the user resolve the hashed path.
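For illustration, writing such a lookup entry could look roughly like this
(the file name `backups.info`, the helper name, and the hash choice are
assumptions for the sketch, not micro's actual implementation):

    package backup

    import (
        "crypto/sha256"
        "fmt"
        "os"
        "path/filepath"
    )

    // writeLookupEntry appends a "hash original-path" line to a lookup
    // file in the backup directory, so the user can map a hashed backup
    // filename back to the file it belongs to.
    func writeLookupEntry(backupDir, origPath string) error {
        hash := fmt.Sprintf("%x", sha256.Sum256([]byte(origPath)))
        f, err := os.OpenFile(filepath.Join(backupDir, "backups.info"),
            os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
        if err != nil {
            return err
        }
        defer f.Close()
        _, err = fmt.Fprintf(f, "%s %s\n", hash, origPath)
        return err
    }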
Using notrunc stops the up-front truncation of the target file that the
dd process run via sudo would otherwise perform. We need this because dd,
like other coreutils, truncates the output file on open() by default. If
storing the file contents fails afterwards, we would end up with a
truncated file for which the user has no write permission by default.
Instead, we use a second `dd` invocation to perform the necessary
truncation on the command line.
With the fsync option we force the dd process to synchronize the written file
to the underlying device.
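For illustration, the two-step approach could look roughly like this
(the exact dd options and the helper are assumptions for the sketch,
not necessarily micro's actual invocation):

    package save

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    // sudoWrite writes contents to path as root via sudo + dd.
    func sudoWrite(path string, contents []byte) error {
        // Step 1: conv=notrunc,fsync writes without truncating on
        // open() and fsyncs the result before dd exits.
        write := exec.Command("sudo", "dd", "conv=notrunc,fsync", "of="+path)
        write.Stdin = bytes.NewReader(contents)
        if err := write.Run(); err != nil {
            return err
        }
        // Step 2: with count=0 and no conv=notrunc, dd truncates the
        // output file at the seek offset, i.e. to the exact new length.
        trunc := exec.Command("sudo", "dd", "bs=1", "count=0",
            fmt.Sprintf("seek=%d", len(contents)), "of="+path)
        return trunc.Run()
    }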
Use the same backup write helper function for both periodic
background backups and for temporary backups in safeWrite().
Besides removing code duplication, this brings the advantages
of both together:
- Temporary backups in safeWrite() now use the same atomic mechanism
  when replacing an already existing backup, so if micro crashes in
  the middle of writing the backup in safeWrite(), the corrupted
  backup will not overwrite a previous good backup (see the sketch
  after this list).
- Better error handling for periodic backups.
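A minimal sketch of that atomic mechanism (the helper name is
hypothetical; micro's real code differs in detail): write the new
backup to a temporary file first and rename it over the old backup
only once the write has fully succeeded.

    package backup

    import "os"

    // atomicWriteBackup never destroys a previous good backup: a crash
    // in the middle of the write at worst leaves a stale .tmp file.
    func atomicWriteBackup(backupPath string, contents []byte) error {
        tmp := backupPath + ".tmp"
        if err := os.WriteFile(tmp, contents, 0600); err != nil {
            os.Remove(tmp) // best-effort cleanup
            return err
        }
        // rename(2) is atomic within a filesystem.
        return os.Rename(tmp, backupPath)
    }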
Micro's logic for periodic backup creation is racy and may cause
spurious backups of unmodified buffers, at least for the following
reasons:
1. When a buffer is closed, its backup is removed by the main goroutine,
without any synchronization with the backup/save goroutine which
creates periodic backups in the background.
A part of the problem here is that the main goroutine removes the
backup before setting b.fini to true, not after it, so the
backup/save goroutine may start creating a new backup even after it
has been removed by the main goroutine. But even if we move the
b.RemoveBackup() call after setting b.fini, it will not solve the
problem, since the backup/save goroutine may have already started
creating a new periodic backup just before b.fini was set to true.
2. When a buffer is successfully saved and thus its backup is removed,
if there was a periodic backup for this buffer requested by the main
goroutine but not saved by the backup/save goroutine yet (i.e. this
request is still pending in backupRequestChan), micro doesn't cancel
this pending request, so a backup is unexpectedly saved a couple of
seconds after the file itself was saved.
Usually this erroneous backup is removed later, when the buffer is
closed. But if micro terminates abnormally and the buffer is not
properly closed, this backup is not removed. And if this issue occurs
in combination with the race described in reason 1 above, this backup
may not be successfully removed either.
So, to fix these issues:
1. Do the backup removal in the backup/save goroutine (at requests from
the main goroutine), not directly in the main goroutine.
2. Make the communication between these goroutines fully synchronous:
2a. Instead of using the buffered channel backupRequestChan as a storage
    for pending periodic backup requests, let the backup/save goroutine
    itself store this information, in the requestedBackups map. Then
    backupRequestChan can be made unbuffered.
2b. Make saveRequestChan an unbuffered channel as well. (There was no
    point in making it buffered in the first place, actually.) Once both
    channels are unbuffered, the backup/save goroutine receives both
    backup and save requests from the main goroutine in exactly the same
    order as the main goroutine sends them, so we can guarantee that
    saving the buffer will cancel the previous pending backup request
    for this buffer (see the sketch below).
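A condensed model of the resulting scheme (identifiers follow the
description above; everything else is simplified, not micro's literal
code):

    package backup

    import "time"

    // Minimal stand-in for micro's real buffer type.
    type SharedBuffer struct{}

    func (b *SharedBuffer) writeBackup() { /* write periodic backup */ }
    func (b *SharedBuffer) save()        { /* write the file itself */ }

    var (
        backupRequestChan = make(chan *SharedBuffer) // unbuffered
        saveRequestChan   = make(chan *SharedBuffer) // unbuffered
    )

    // backupSaveLoop owns the set of pending backup requests. Since both
    // channels are unbuffered, requests arrive in exactly the order the
    // main goroutine sends them, so a save reliably cancels an earlier
    // pending backup request for the same buffer.
    func backupSaveLoop(backupTime time.Duration) {
        requestedBackups := make(map[*SharedBuffer]struct{})
        ticker := time.NewTicker(backupTime)
        for {
            select {
            case b := <-backupRequestChan:
                requestedBackups[b] = struct{}{}
            case b := <-saveRequestChan:
                delete(requestedBackups, b) // cancel pending backup
                b.save()
            case <-ticker.C:
                for b := range requestedBackups {
                    b.writeBackup()
                    delete(requestedBackups, b)
                }
            }
        }
    }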
Various methods of Buffer should rather be methods of SharedBuffer. This
commit doesn't move all of them to SharedBuffer yet, only those that need
to be moved in order to be able to request creating or removing backups
in other SharedBuffer methods.
Instead of calculating the hash of the buffer every time Modified() is
called, calculate it every time b.isModified is updated (i.e. every time
the buffer is modified), and set the b.isModified value accordingly.
This change means that the hash will be recalculated every time the user
types or deletes a character. But that is what already happens anyway,
since inserting or deleting characters triggers redrawing the display,
in particular redrawing the status line, which triggers Modified() in
order to show the up-to-date modified/unmodified status in the status
line. And with this change, we will be able to check this status
more than once during a single "handle event & redraw" cycle, while
still recalculating the hash only once.
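A rough sketch of the idea (field and method names follow the
description; the hashing and buffer access are simplified stand-ins):

    package buffer

    import "crypto/md5"

    type SharedBuffer struct {
        origHash   [md5.Size]byte // hash of the buffer as last saved
        isModified bool
    }

    func (b *SharedBuffer) bytes() []byte { return nil /* simplified */ }

    // updateModified recomputes the modified status once per text change,
    // so Modified() becomes a cheap field read that can be called many
    // times during a single "handle event & redraw" cycle.
    func (b *SharedBuffer) updateModified() {
        b.isModified = md5.Sum(b.bytes()) != b.origHash
    }

    func (b *SharedBuffer) Modified() bool { return b.isModified }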
- extract the open logic into `openFile()`, which returns a `wrappedFile`
- extract the closing logic into `Close()` and make it a method of `wrappedFile`
- rename `writeFile()` to `Write()` and make it a method of `wrappedFile`
This allows using the split parts on their own while keeping overwriteFile()
as a simple interface that runs them all in a row.
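In simplified form (the real signatures and the sudo handling in micro
differ; this only shows the shape of the split):

    package util

    import "os"

    // wrappedFile bundles the file handle with whatever state the open
    // step produced, so Write and Close can be used independently.
    type wrappedFile struct {
        f *os.File
    }

    func openFile(path string) (*wrappedFile, error) {
        f, err := os.OpenFile(path, os.O_WRONLY|os.O_CREATE, 0644)
        if err != nil {
            return nil, err
        }
        return &wrappedFile{f: f}, nil
    }

    func (w *wrappedFile) Write(b []byte) (int, error) { return w.f.Write(b) }

    func (w *wrappedFile) Close() error { return w.f.Close() }

    // overwriteFile stays as the simple all-in-one interface.
    func overwriteFile(path string, b []byte) error {
        w, err := openFile(path)
        if err != nil {
            return err
        }
        defer w.Close()
        _, err = w.Write(b)
        return err
    }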
When saving a file with sudo fails (e.g. if we set `sucmd` to a
non-existent binary, say `set sucmd aaa`), we erroneously return
success instead of the error; as a result we report to the user
that the file has been successfully saved. Fix it.
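The shape of the fix, reduced to a sketch (hypothetical helper, not the
literal diff): propagate the subprocess error to the caller instead of
dropping it.

    package save

    import "os/exec"

    // saveWithSudo now returns the error from running sucmd; before the
    // fix, the function reported success even when sucmd failed to run.
    func saveWithSudo(sucmd string, args ...string) error {
        cmd := exec.Command(sucmd, args...)
        return cmd.Run()
    }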
When we are saving a file with sudo, interrupting sudo via Ctrl-C
doesn't just kill sudo, it kills micro itself.
The cause is the same as in issue #2612 for RunInteractiveShell(),
which was fixed by #3357. So fix it the same way as in #3357.
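One common shape of such a fix, as a hedged sketch (the actual change
in #3357 may differ in detail): let the parent ignore SIGINT while the
child owns the terminal, so Ctrl-C only reaches the child.

    package shell

    import (
        "os"
        "os/exec"
        "os/signal"
    )

    // runForeground runs cmd on the user's terminal. While it runs,
    // micro itself must not die from the Ctrl-C meant for the child.
    func runForeground(cmd *exec.Cmd) error {
        cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
        signal.Ignore(os.Interrupt)      // don't let Ctrl-C kill micro
        defer signal.Reset(os.Interrupt) // restore default handling
        return cmd.Run()
    }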
Saving a buffer every time without even checking if it was modified
(i.e. even when the user is not editing the buffer) is wasteful,
especially if the autosave period is set to a short value.
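The fix amounts to a guard in the autosave loop, roughly (simplified
stand-ins, not micro's literal code):

    package autosave

    import "time"

    type Buffer struct{}

    func (b *Buffer) Modified() bool { return false /* simplified */ }
    func (b *Buffer) Save() error    { return nil /* simplified */ }

    // autosaveLoop now only writes buffers that actually changed since
    // the last save, so an idle editor does no disk writes at all.
    func autosaveLoop(bufs []*Buffer, period time.Duration) {
        for range time.Tick(period) {
            for _, b := range bufs {
                if b.Modified() {
                    b.Save()
                }
            }
        }
    }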
Similarly to the crash fixed by #2967, which happens if sudo failed,
a crash also happens when sudo fails even to start. The reason for
the crash is similar too: a nil dereference of screen.Screen, caused
by the fact that we do not restore the temporarily disabled screen.
To reproduce this crash, set the `sucmd` option to some non-existing
command, e.g. `aaa`, and try to save a file with root privileges.
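A hypothetical reconstruction of the fixed code path, assuming micro's
screen.TempFini()/screen.TempStart() helpers behave as their names
suggest: deferring TempStart guarantees the screen is re-enabled even
if the sucmd process fails to start.

    package save

    import (
        "os/exec"

        "github.com/zyedidia/micro/v2/internal/screen"
    )

    func saveWithSu(sucmd string, args ...string) error {
        screenb := screen.TempFini()
        defer screen.TempStart(screenb) // runs even if Start() fails

        cmd := exec.Command(sucmd, args...)
        if err := cmd.Start(); err != nil {
            return err // previously this path left the screen disabled
        }
        return cmd.Wait()
    }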
On modern Linux systems, it can take 30 seconds for
the data to actually hit the disk (check
/proc/sys/vm/dirty_expire_centisecs).
If the computer crashes in those 30 seconds, the user
may end up with an empty file as seen here:
https://github.com/neovim/neovim/issues/9888
This is why editors like vim and nano call
the fsync syscall after writing the file.
In Go, this syscall is available as file.Sync().
Running strace against micro shows that fsync is
called as expected:
$ strace -f -p $(pgrep micro) -e fsync
strace: Process 3284344 attached with 9 threads
[pid 3284351] fsync(8) = 0
Also, we now catch errors returned from w.Flush().
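Put together, the write path now looks roughly like this (a simplified
sketch, not the literal code):

    package fileutil

    import (
        "bufio"
        "os"
    )

    // writeAndSync flushes the buffered writer, checks its error, and
    // fsyncs the file so the data reaches the device before we report
    // success to the user.
    func writeAndSync(path string, contents []byte) error {
        f, err := os.Create(path)
        if err != nil {
            return err
        }
        defer f.Close()

        w := bufio.NewWriter(f)
        if _, err := w.Write(contents); err != nil {
            return err
        }
        if err := w.Flush(); err != nil { // errors no longer ignored
            return err
        }
        return f.Sync() // fsync(2): force the data to the device
    }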