How do I copy files that need root access with scp?
I have an Ubuntu server to which I am connecting using SSH.
I need to upload files from my machine into /var/www/ on the server; the files in /var/www/ are owned by root.
Using PuTTY, after I log in, I have to type sudo su and my password first in order to be able to modify files in /var/www/.
But when I am copying files using WinSCP, I can't create/modify files in /var/www/, because the user I'm connecting with does not have permissions on files in /var/www/, and I can't run sudo su as I do in an SSH session.
Do you know how I could deal with this?
If I were working on my local machine, I would call gksudo nautilus, but in this case I only have terminal access to the machine.
12 Answers
You're right, there is no sudo when working with scp. A workaround is to use scp to upload files to a directory where your user has permissions to create files, then log in via ssh and use sudo to move/copy files to their final destination.
scp -r folder/ :/some/folder/you/dont/need/sudo
ssh 
$ sudo mv /some/folder /some/folder/requiring/perms
# YOU MAY NEED TO CHANGE THE OWNER, like:
# sudo chown -R user:user folder

Another solution would be to change the permissions/ownership of the directories you are uploading the files to, so your non-privileged user is able to write to those directories.
Generally, working in the root account should be the exception, not the rule. The way you're phrasing your question makes me think maybe you're abusing it a bit, which in turn leads to problems with permissions; under normal circumstances you don't need super-admin privileges to access your own files.
Technically, you can configure Ubuntu to allow remote login directly as root, but this feature is disabled for a reason, so I would strongly advise against doing that.
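For completeness, here is a minimal local dry-run of the same two-step sequence, with cp standing in for scp and plain mv standing in for the sudo mv on the server; all paths are throwaway temporary directories, not real server paths:

```shell
# Dry-run of the two-step workflow: "upload" into a staging directory
# your user can write to, then move the folder to its final destination.
set -e

work=$(mktemp -d)
mkdir -p "$work/local/folder" "$work/staging" "$work/final"
echo "hello" > "$work/local/folder/index.html"

# Step 1: in real use this would be
#   scp -r folder/ user@server:/home/user/staging/
cp -r "$work/local/folder" "$work/staging/"

# Step 2: in real use this would run over ssh:
#   ssh user@server 'sudo mv /home/user/staging/folder /var/www/'
mv "$work/staging/folder" "$work/final/"

result=$(cat "$work/final/folder/index.html")
echo "$result"
rm -rf "$work"
```

The staging directory on the server only needs to be writable by your ssh user; sudo is needed for nothing but the final move.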
Quick way
From server to local machine:
ssh user@server "sudo cat /etc/dir/file" > /home/user/file

From local machine to server:
cat /home/user/file | ssh user@server "sudo tee -a /etc/dir/file"

Another method is to copy using tar + ssh instead of scp:
tar -c -C ./my/local/dir . \
  | ssh  "sudo tar -x --no-same-owner -C /var/www"

You can also use ansible to accomplish this.
Copy to remote host using ansible's copy module:
ansible -i HOST, -b -m copy -a "src=SRC_FILEPATH dest=DEST_FILEPATH" all

Fetch from remote host using ansible's fetch module:
ansible -i HOST, -b -m fetch -a "src=SRC_FILEPATH dest=DEST_FILEPATH flat=yes" all

NOTE:
- The comma in the -i HOST, syntax is not a typo. It is the way to use ansible without needing an inventory file.
- -b causes the actions on the server to be done as root. -b expands to --become, and the default --become-user is root, with the default --become-method being sudo.
- flat=yes copies just the file; it doesn't copy the whole remote path leading to the file.
- Using wildcards in the file paths isn't supported by these ansible modules.
- Copying a directory is supported by the copy module, but not by the fetch module.
Specific Invocation for this Question
Here's an example that is specific and fully specified, assuming the directory on your local host containing the files to be distributed is sourcedir, and that the remote target's hostname is hostname:
cd sourcedir && \
ansible \
    --inventory-file hostname, \
    --become \
    --become-method sudo \
    --become-user root \
    --module-name copy \
    --args "src=. dest=/var/www/" \
    all

With the concise invocation being:
cd sourcedir && \
ansible -i hostname, -b -m copy -a "src=. dest=/var/www/" all

P.S. I realize that saying "just install this fabulous tool" is kind of a tone-deaf answer. But I've found ansible to be super useful for administering remote servers, so installing it will surely bring you other benefits beyond deploying files.
Maybe the best way is to use rsync (Cygwin/cwRsync on Windows) over SSH?
For example, to upload files with owner www-data:
rsync -a --rsync-path="sudo -u www-data rsync" path_to_local_data/ :/var/www

In your case, if you need root privileges, the command will be like this:
rsync -a --rsync-path="sudo rsync" path_to_local_data/ :/var/www

See: scp to remote server with sudo.
When you run sudo su, any files you create will be owned by root, but it is not possible by default to log in directly as root with ssh or scp. It is also not possible to use sudo with scp, so the files are not usable. Fix this by claiming ownership over your files:
Assuming your user name is dimitri, you could use this command:

sudo chown -R dimitri:dimitri /home/dimitri

From then on, as mentioned in other answers, the "Ubuntu" way is to use sudo, and not root logins. It is a useful paradigm, with great security advantages.
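As a small, hedged illustration of what the recursive flag does, run against a throwaway directory with the current user standing in for dimitri (no sudo needed, since we already own the files):

```shell
set -e
tree=$(mktemp -d)
mkdir -p "$tree/sub"
touch "$tree/sub/file.txt"

# Hand the whole tree to the current user and group, just as
# "sudo chown -R dimitri:dimitri /home/dimitri" would for dimitri.
chown -R "$(id -un):$(id -gn)" "$tree"

# -R visits every file below the top directory, not just the top.
owner=$(stat -c %U "$tree/sub/file.txt")
echo "$owner"
rm -rf "$tree"
```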
If you use the OpenSSH tools instead of PuTTY, you can accomplish this by initiating the scp file transfer on the server with sudo. Make sure you have an sshd daemon running on your local machine. With ssh -R you can give the server a way to contact your machine.
On your machine:
ssh -R 11111:localhost:22 REMOTE_USERNAME@SERVERNAME

In addition to logging you in on the server, this will forward every connection made on the server's port 11111 to your machine's port 22: the port your sshd is listening on.
On the server, start the file transfer like this:
cd /var/www/
sudo scp -P 11111 -r LOCAL_USERNAME@localhost:FOLDERNAME .

Here's a modified version of Willie Wheeler's answer that transfers the file(s) via tar but also supports passing a password to sudo on the remote host.
(stty -echo; read passwd; stty echo; echo $passwd; tar -cz foo.*) \
  | ssh remote_host "sudo -S bash -c \"tar -C /var/www/ -xz; echo\""

The little bit of extra magic here is the -S option to sudo. From the sudo man page:
-S, --stdin
    Write the prompt to the standard error and read the password from the standard input instead of using the terminal device. The password must be followed by a newline character.
Now we actually want the output of tar to be piped into ssh, which redirects the stdin of ssh to the stdout of tar, removing any way to pass the password into sudo from the interactive terminal. (We could use sudo's ASKPASS feature on the remote end, but that is another story.) We can get the password into sudo, though, by capturing it in advance and prepending it to the tar output, performing those operations in a subshell and piping the output of the subshell into ssh. This also has the added advantage of not leaving an environment variable containing our password dangling in our interactive shell.
You'll notice I didn't execute 'read' with the -p option to print a prompt. This is because the password prompt from sudo is conveniently passed back to the stderr of our interactive shell via ssh. You might wonder "how is sudo executing given it is running inside ssh to the right of our pipe?" When we execute multiple commands and pipe the output of one into another, the parent shell (the interactive shell in this case) executes each command in the sequence immediately after executing the previous. As each command behind a pipe is executed the parent shell attaches (redirects) the stdout of the left-hand side to the stdin of the right-hand side. Output then becomes input as it passes through processes. We can see this in action by executing the entire command and backgrounding the process group (Ctrl-z) before typing our password, and then viewing the process tree.
$ (stty -echo; read passwd; stty echo; echo $passwd; tar -cz foo.*) \
  | ssh remote_host "sudo -S bash -c \"tar -C /var/www/ -xz; echo\""
[sudo] password for bruce:
[1]+  Stopped    ( stty -echo; read passwd; stty echo; echo $passwd; tar -cz foo.* ) | ssh remote_host "sudo -S bash -c \"tar -C /var/www/ -xz; echo\""
$ pstree -lap $$
bash,7168
  ├─bash,7969
  ├─pstree,7972 -lap 7168
  └─ssh,7970 remote_host sudo -S bash -c "tar -C /var/www/ -xz; echo"

Our interactive shell is PID 7168, our subshell is PID 7969, and our ssh process is PID 7970.
The only drawback is that read will accept input before sudo has had time to send back its prompt. On a fast connection and fast remote host you won't notice this, but you might if either is slow. Any delay will not affect the ability to enter the password; the prompt just might appear after you have started typing.
Note that I simply added a hosts file entry for "remote_host" to my local machine for the demo.
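You can verify the stdin-splitting behaviour without a remote host or sudo at all. In this sketch, a plain read stands in for sudo -S (it consumes exactly the first newline-terminated line of the pipe), and the remainder of the stream is still a valid tar archive; the paths and password are made up:

```shell
set -e
src=$(mktemp -d); dst=$(mktemp -d)
echo "payload" > "$src/foo.txt"

# Prepend a fake password to the tar stream in a subshell, exactly as
# the answer does; then let read (standing in for sudo -S) eat the
# first line before tar unpacks the rest of the stream.
( echo "s3cret"; tar -cz -C "$src" foo.txt ) \
  | { read -r password; tar -xz -C "$dst"; }

result=$(cat "$dst/foo.txt")
echo "$result"
rm -rf "$src" "$dst"
```

This works because a POSIX read on a pipe consumes input one byte at a time and stops at the first newline, leaving everything after it for the next reader.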
You may use a script I've written, inspired by this topic:
touch /tmp/justtest && scpassudo /tmp/justtest :/tmp/

but this requires some crazy stuff (which is, by the way, done automatically by the script):
- the server the file is being sent to will no longer ask for a password while establishing an ssh connection to the source computer
- because no sudo prompt can appear on the server, sudo will no longer ask the user for a password on the remote machine
Here goes the script:
interface=wlan0
if [[ $# -ge 3 ]]; then interface=$3; fi
thisIP=$(ifconfig | grep $interface -b1 | tail -n1 | egrep -o '[0-9.]{4,}' -m1 | head -n 1)
thisUser=$(whoami)
localFilePath=/tmp/justfortest
destIP=192.168.0.2
destUser=silesia
#dest
#destFolderOnRemoteMachine=/opt/glassfish/glassfish/
#destFolderOnRemoteMachine=/tmp/
if [[ $# -eq 0 ]]; then
echo -e "Send file to remote server to location where root permission is needed.\n\tusage: $0 local_filename [username@](ip|host):(remote_folder/|remote_filename) [optionalInterface=wlan0]"
echo -e "Example: \n\ttouch /tmp/justtest &&\n\t $0 /tmp/justtest :/tmp/ "
exit 1
fi
localFilePath=$1
test -e $localFilePath
destString=$2
usernameAndHost=$(echo $destString | cut -f1 -d':')
if [[ "$usernameAndHost" == *"@"* ]]; then
destUser=$(echo $usernameAndHost | cut -f1 -d'@')
destIP=$(echo $usernameAndHost | cut -f2 -d'@')
else
destIP=$usernameAndHost
destUser=$thisUser
fi
destFolderOnRemoteMachine=$(echo $destString | cut -f2 -d':')
set -e #stop script if there is even single error
echo 'First step: we need to be able to execute scp without any user interaction'
echo 'generating public key on machine, which will receive file'
ssh $destUser@$destIP 'test -e ~/.ssh/id_rsa.pub -a -e ~/.ssh/id_rsa || ssh-keygen -t rsa'
echo 'Done'
echo 'Second step: download public key from remote machine to this machine so this machine allows remote machine (the one receiving the file) to log in without asking for password'
key=$(ssh $destUser@$destIP 'cat ~/.ssh/id_rsa.pub')
if ! grep -q "$key" ~/.ssh/authorized_keys; then
echo $key >> ~/.ssh/authorized_keys
echo 'Added key to authorized keys'
else
echo "Key already exists in authorized keys"
fi
echo "We will want to execute sudo command remotely, which means turning off asking for password"
echo 'This can be done by this tutorial:'
echo 'This you have to do manually: '
echo -e "execute in new terminal: \n\tssh $destUser@$destIP\nPress enter when ready"
read
echo 'run there sudo visudo'
read
echo 'change '
echo ' %sudo ALL=(ALL:ALL) ALL'
echo 'to'
echo ' %sudo ALL=(ALL:ALL) NOPASSWD: ALL'
echo "After this step you will be done."
read
listOfFiles=$(ssh $destUser@$destIP "sudo ls -a")
if [[ "$listOfFiles" != "" ]]; then
echo "Sending by executing command, in fact, receiving, file on remote machine"
echo 'Note that this command (due to " instead of '', see man bash | less -p''quotes'') is filled with values from local machine'
echo -e "Executing \n\t""identy=~/.ssh/id_rsa; sudo scp -i \$identy $(whoami)@$thisIP:$(readlink -f $localFilePath) $destFolderOnRemoteMachine"" \non remote machine"
ssh $destUser@$destIP "identy=~/.ssh/id_rsa; sudo scp -i \$identy $(whoami)@$thisIP:$(readlink -f $localFilePath) $destFolderOnRemoteMachine"
ssh $destUser@$destIP "ls ${destFolderOnRemoteMachine%\\\\n}/$(basename $localFilePath)"
if [[ ! "$?" -eq 0 ]]; then echo "error in validating"; else echo -e "SUCCESS! Successfully sent\n\t$localFilePath \nto \n\t$destString\nFind more at "; fi
else
echo "something went wrong with executing sudo on remote host, failure"
fi
ENDOFSCRIPT
) | sudo tee /usr/bin/scpassudo && chmod +x /usr/bin/scpassudo

You can combine ssh, sudo and e.g. tar to transfer files between servers without being able to log in as root and without having permission to access the files with your user. This is slightly fiddly, so I've written a script to help with it. You can find the script here:
or here:
#! /bin/bash
res=0
from=$1
to=$2
shift
shift
files="$@"
if test -z "$from" -o -z "$to" -o -z "$files"
then
    echo "Usage: $0 <from> <to> (file)*"
    echo "example: $0 server1 server2 /usr/bin/myapp"
    exit 1
fi
read -s -p "Enter Password: " sudopassword
echo ""
temp1=$(mktemp)
temp2=$(mktemp)
(echo "$sudopassword";echo "$sudopassword"|ssh $from sudo -S tar c -P -C / $files 2>$temp1)|ssh $to sudo -S tar x -v -P -C / 2>$temp2
sourceres=${PIPESTATUS[0]}
if [ $? -ne 0 -o $sourceres -ne 0 ]
then
    echo "Failure!" >&2
    echo "$from output:" >&2
    cat $temp1 >&2
    echo "" >&2
    echo "$to output:" >&2
    cat $temp2 >&2
    res=1
fi
rm $temp1 $temp2
exit $res

An older question, I know, but times change and so do some techniques. Just in case someone is still looking for a streamlined way to accomplish this.
Assumptions
- Your user on the server is a sudoer.
- You are running Windows 10.
- The files in /var/www should belong to the user:group www-data:www-data
Concept
The concept is to combine remote commands over ssh and scp file transfers without relying on GUIs such as PuTTY or WinSCP. These commands can be run either from a Command Prompt or PowerShell. There are five main tasks to perform:
- Environment setup
- Transfer files to server
- Set remote file permissions
- Transfer between remote folders
- Cleanup
Tasks 3-5 can be performed in a single step. If you plan to do this often, leaving the environment setup in place will allow you to omit tasks 1 and 5.
Environment Setup
You may or may not already have a folder you can use as a temporary repository for the transfer. If not, you can run:
ssh "mkdir ~/wwwtemp"

Depending on your server's settings, you may or may not be prompted for the user's password/passphrase to authenticate the ssh session.
Once the session is authenticated, the mkdir ~/wwwtemp command will execute, then the ssh session will terminate, and you will be back at your prompt (Command Prompt or PowerShell).
Transfer Files to Server
The next thing to do is to transfer the files from the local Windows machine to the Ubuntu server using scp like so:
scp -r local\path :~/wwwtemp/

Depending on your server's authentication method, you may or may not need to enter a password/passphrase.
Permissions and Final Destination of Files
Once the file transfer has completed, you can run a series of commands over ssh like this:
ssh -t "sudo chown -R www-data:www-data ~/wwwtemp && sudo mv ~/wwwtemp/* /var/www/ && sudo rmdir ~/wwwtemp"

Again, depending on the authentication method of your server, you may or may not be prompted for a password/passphrase. Regardless of your authentication method, sudo will prompt you for the user's password. Unless, of course, you have disabled the password requirement when the user runs chown, mv and rmdir. See this question for guidance on how to do that.
This step covers tasks 3-5:
- sudo chown -R www-data:www-data ~/wwwtemp recursively sets the desired ownership on the files you just uploaded.
- sudo mv ~/wwwtemp/* /var/www/ moves the contents of the temporary repository to their final destination.
- sudo rmdir ~/wwwtemp removes the temporary repository. It is necessary to use sudo here since we changed the directory owner in task 3.
Of course, && separates each command, and each command runs only if the previous one succeeded. If you plan to keep the repository wwwtemp, you can omit the final command in the sequence.
Notes
You can omit && sudo rmdir ~/wwwtemp from the end of the final ssh command string if you would like to continue using the temporary repository in future. Doing so also means that you can omit the first ssh command each time you desire to transfer files to your server in this manner.
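Leaving out sudo and the ownership change (which need the actual server), the move-and-clean-up part of that command chain can be sketched locally like this; the directory names are made-up stand-ins for ~/wwwtemp and /var/www:

```shell
set -e
base=$(mktemp -d)
mkdir -p "$base/wwwtemp" "$base/var/www"
echo "<h1>hi</h1>" > "$base/wwwtemp/index.html"

# Stand-in for: sudo mv ~/wwwtemp/* /var/www/ && sudo rmdir ~/wwwtemp
mv "$base/wwwtemp/"* "$base/var/www/"
rmdir "$base/wwwtemp"

moved=$(cat "$base/var/www/index.html")
if [ -d "$base/wwwtemp" ]; then cleaned="no"; else cleaned="yes"; fi
echo "$moved $cleaned"
rm -rf "$base"
```

Note that rmdir only succeeds because mv emptied the directory first, which is exactly why the && ordering in the real command matters.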
$ scp -i example.pem -r sourcefile.txt ubuntu@10.12.3.4:/example_folder

Before executing this command, we need to give full permissions to example_folder:
$ sudo chmod 777 example_folder