Accidentally overwrote a binary file on Linux? Here is how to restore it

http://ift.tt/2qgqSgq

Posted in Categories Command Line Hacks, last updated May 23, 2017

A shell script went wild due to a bug, and the script overwrote the binary file /bin/ping. Here is how to restore it.

/bin/ping erased (Credit: http://ift.tt/1l0OTWM)


There are two ways to solve this problem.

Easy way: copy it from another server

Just scp the file from another box running the same version of your Linux distribution:
$ sudo scp user@server1:/bin/ping /bin/ping

Proper sysadmin way: Search and reinstall package

First, query the package which provides the file /bin/ping as per your Linux distro:

Debian/Ubuntu Linux users type:

$ dpkg -S /bin/ping
iputils-ping: /bin/ping

Now just reinstall the iputils-ping package using apt-get command or apt command:
$ sudo apt-get --reinstall install iputils-ping

RHEL/SL/Scientific/Oracle Linux users type:

$ yum provides /bin/ping
iputils-20071127-24.el6.x86_64 : Network monitoring tools including ping

Now just reinstall the iputils package using yum command:
$ sudo yum reinstall iputils

Fedora Linux users type:

$ dnf provides /bin/ping
iputils-20161105-1.fc25.x86_64 : Network monitoring tools including ping

Now just reinstall the iputils package using dnf command:
$ sudo dnf reinstall iputils

Arch Linux users type:

$ pacman -Qo /bin/ping
/usr/bin/ping is owned by iputils 20161105.1f2bb12-2

Now just reinstall the iputils package using pacman command:
$ sudo pacman -S iputils

Suse/OpenSUSE Linux users type:

$ zypper search -f /bin/ping
Sample outputs:

Loading repository data...
Reading installed packages...

S | Name    | Summary                            | Type   
--+---------+------------------------------------+--------
  | ctdb    | Clustered TDB                      | package
i | iputils | IPv4 and IPv6 Networking Utilities | package
  | pingus  | Free Lemmings-like puzzle game     | package

Now just reinstall the iputils package using zypper command:
$ sudo zypper install -f iputils

What can be done to avoid such problems in the future?

Testing in a sandbox is an excellent way to prevent such problems. Care must also be taken to make sure the variable has a value. The following is dangerous:
echo "foo" > $file
Maybe something like as follows would help (see “If Variable Is Not Defined, Set Default Variable“)
file="${1:-/tmp/file.txt}"
echo "foo" > $file

Another option is to stop if the variable is not defined:
: "${Variable:?Error: \$Variable is not defined}"
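Putting these guards together, a defensive version of the script might look like this (the default path and messages are illustrative, not from the original script):

```shell
#!/bin/bash
set -u                                  # abort when an unset variable is used
file="${1:-/tmp/file.txt}"              # fall back to a default if $1 is absent
: "${file:?Error: \$file is empty}"     # stop with a message if empty
echo "foo" > "$file"                    # quote the expansion to survive spaces
```

With set -u in place, a typo such as $flie aborts the script instead of silently redirecting output to an unexpected file.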

Linux


via [RSS/Feed] nixCraft: Linux Tips, Hacks, Tutorials, And Ideas In Blog Format http://ift.tt/Awfi7s

May 23, 2017 at 02:27PM

Say Hello to the Slimbook Pro, a 13-inch Linux Laptop


http://ift.tt/2qMtB4v

Spanish hardware company Slimbook is on a roll.

It already caters to the needs of KDE enthusiasts, and recently it unsheathed the impressive aluminium 15-inch Slimbook Excalibur.

Today the company takes the shrink wrap off of yet another Linux powered laptop.

Say hello to the Slimbook Pro.

Slimbook Pro Specs & Price

With an aluminium body, lightweight build and 13.1-inch display, the Slimbook Pro plays smaller sibling to the 15.6-inch Slimbook Excalibur.

The base model ships with a standard FHD (1920 x 1080) panel, but for a mere €49 more you can upgrade this to a QHD+ (3200 x 1800) HiDPI display.

Canonical is sponsoring a HiDPI hackfest for GNOME right now, so if you opt to go HiDPI you can expect to see various improvements to the Ubuntu HiDPI experience in future releases.

Inside there’s a choice of 7th Gen Intel ‘Kaby Lake’ processors:

  • Intel i3-7100U @ 2.4GHz
  • Intel i5-7200U @ 2.5GHz
  • Intel i7-7500U @ 2.7GHz

The integrated graphics of Kaby Lake won’t handle top-tier gaming titles at max frame rates but are perfectly adequate for most needs.

All models come with 4GB RAM as standard (8GB and 16GB upgrades available). Base storage is 120GB SSD, with a variety of upgrades and second hard disk options available at extra cost — yup, there’s enough space inside for 2 hard drives.

You can also elect to kit your Slimbook Pro out with Intel Dual Band 8265AC WiFi, which apparently has “better signal and stability in the latest Linux kernels“.

 

As a portable laptop rather than a portable workstation, power consumption is a key consideration. Improvements in the Linux kernel combined with the lower power draw of Intel’s Kaby Lake processors mean you should expect a decent amount of battery life from the Pro; for comparison, the 13.1-inch System76 Galago Pro manages a terse 4 hours max, and you should be able to eke a bit more out of the Slimbook Pro.

Ports-wise the laptop has all you could ask for, including:

  • Full-size HDMI out
  • Mini Display Port
  • 2x USB 3.1
  • 1x USB Type-C
  • SD card slot
  • Ethernet RJ45 jack
  • Courage jack
  • Mic jack

Along with a full-size backlit keyboard in either American or Spanish layouts, the Slimbook Pro also uses a Synaptics touchpad.

 

The Slimbook Pro price starts at €699 for the base Intel i3 model.

There is no denying that, yes, with smaller computer companies selling Linux products you do tend to pay a little bit more for the privilege — but that’s economies of scale for you; you can’t expect to pay own-brand label prices for what is, in effect, organic produce.

Linux


via OMG! Ubuntu! http://ift.tt/eCozVa

May 23, 2017 at 05:55PM

A Brief Look at the Roots of Linux Containers


http://ift.tt/2q2SBFX

In previous excerpts of the new, self-paced Containers Fundamentals course from The Linux Foundation, we discussed what containers are and are not. Here, we’ll take a brief look at the history of containers, which includes chroot, FreeBSD jails, Solaris zones, and systemd-nspawn. 

Chroot was first introduced in 1979, during development of Seventh Edition Unix (also called Version 7), and was added to BSD in 1982. In 2000, FreeBSD extended chroot to FreeBSD Jails. Then, in the early 2000s, Solaris introduced the concept of zones, which virtualized the operating system services.

With chroot, you can change the apparent root directory for the currently running process and its children. After configuring chroot, subsequent commands run with respect to the new root (/). With chroot, we can isolate processes only at the filesystem level; they still share resources such as users, hostname, and IP address. FreeBSD Jails extended the chroot model by virtualizing users, network sub-systems, etc.

systemd-nspawn has not been around as long as chroot and Jails, but it can be used to create containers, which would be managed by systemd. On modern Linux operating systems, systemd is used as an init system to bootstrap the user space and manage all the processes subsequently.

This training course, presented mainly in video format, is aimed at those who are new to containers and covers the basics of container runtimes, container storage and networking, Dockerfiles, Docker APIs, and more.

You can learn more in the sample course video below, presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook:

Want to learn more? Access all the free sample chapter videos now!

Linux


via http://ift.tt/1Wf4iBh

May 22, 2017 at 09:58AM

WannaCrypt makes an easy case for Linux


http://ift.tt/2pVIkeQ

linuxhero.jpg

Image: Jack Wallen

Ransomware is on the rise. On a single day, WannaCrypt held hostage over 57,000 users worldwide, demanding anywhere between $300-$600 in Bitcoin. Don’t pay up and you’ll not be seeing your data again. Before I get into the thrust of this piece, if anything, let WannaCrypt be a siren call to everyone to backup your data. Period. End of story. With a solid data backup, should you fall prey to ransomware, you are just an OS reinstall and a data restore away from getting back to work.
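A minimal sketch of that backup advice, assuming nothing about your setup beyond a shell and tar (the paths and the function name here are purely illustrative):

```shell
#!/bin/bash
# backup_dir SRC DEST — archive SRC into a dated, compressed tarball under DEST.
# Point DEST at an external drive or network mount so the copy survives
# a reinstall (or a ransomware infection).
backup_dir() {
  local src=$1 dest=$2
  mkdir -p "$dest"
  tar -czf "$dest/backup-$(date +%F).tar.gz" \
      -C "$(dirname "$src")" "$(basename "$src")"
}

# Example: backup_dir "$HOME/Documents" /mnt/backup
```

The point is less the tool than the habit: a current archive kept off the machine turns a ransom demand into a restore job.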

That being said, if there was ever a time for Linux to shine on the desktop, it’s now. I know, I know. Eyes are being rolled and cries of “This again?” are bouncing across the whole of the internet.

Hear me out.

This particular ransomware was nasty; not just in scope, but in design. Consider this:

  • WannaCrypt possesses the capability to spread itself
  • WannaCrypt exploits a known vulnerability in Windows
  • WannaCrypt uses the SMB protocol which is often unfiltered within corporate networks
  • The tools behind WannaCrypt (EternalBlue and DoublePulsar) originated within the NSA
  • Computers in 150 countries were affected (including machines within FedEx, Renault, Telefonica, as well as hospital computer systems across Europe)

The above knowledge (and more) can be found reported just about anywhere (as well as the story behind the man who stopped new infections). The thing is, WannaCrypt isn’t the first of its kind. In fact, ransomware has been exploiting such vulnerabilities for a while. The first known ransomware attack, called “AIDS Trojan,” infected PCs back in 1989. That particular attack replaced the autoexec.bat file; the new file counted the number of times the machine had been booted, and when the count reached 90, all of the filenames on the C: drive were encrypted.

SEE: Patching WannaCrypt: Dispatches from the frontline

Windows, of course, isn’t the only platform to have been hit by ransomware. In fact, back in 2015, the LinuxEncoder ransomware was discovered. That bit of malicious code, however, only affected servers running the Magento ecommerce solution.

The important question here is this: Have there been any ransomware attacks on the Linux desktop? The answer is no.

With that in mind, it’s pretty easy to draw the conclusion that now would be a great time to start deploying Linux on the desktop.

But, but, but!

I can already hear the tired arguments. The primary issue: software. I will counter that argument by saying this: Most software has migrated to either Software as a Service (SaaS) or the cloud. The majority of work people do is via a web browser. Chrome, Firefox, Edge, Safari; with few exceptions, SaaS doesn’t care. With that in mind, why would you want your employees and staff using a vulnerable system?

Consider this: If you have an employee that works a crucial position out in the field and you provide their transportation, would you have them driving a vehicle with a known issue? Say, you know the vehicle has a cracked engine block or frame and could, at any minute, suffer catastrophic failure. That failure could (at best) be the cause of the employee losing a day’s work and (at worst) endanger that employee’s life.

Would you willingly send that employee out in the vehicle? No, you wouldn’t.

Apply that same analogy to your staff computers. Why would you willingly expect them to work with a platform that has suffered from vulnerabilities known to lead to such exploits as WannaCrypt; vulnerabilities that (at best) cause said employee to lose a day’s work and (at worst) dox said employee or negatively impact your bottom line? The difference here is that you would be (and are) willing to deploy systems that are a malformed URL away from compromise.

SEE: Why patching Windows XP forever won’t stop the next WannaCrypt

Nothing is perfect

Don’t get me wrong, I’m not saying Linux is perfect. Any system connected to a network can fall victim to something. But the truth of the matter is, by design, Linux is far less susceptible to the likes of WannaCrypt than is Windows. How do I know this? I’ve been using Linux as my only operating system (on servers and desktops) since 1997 and have only encountered one instance of malicious code (a rootkit on a poorly administered mail server). Those are some pretty good odds there.

Imagine, if you will, you have deployed Linux as a desktop OS for your company and those machines work like champs from the day you set them up to the day the hardware finally fails. Doesn’t that sound like a win your company could use? If your employees work primarily with SaaS (through web browsers), then there is zero reason keeping you from making the switch to a more reliable, secure platform.

Don’t fear change

I get it; I really do. From top to bottom, people fear change. But this fear has been assuaged with users working primarily within a tool that holds a significant amount of universality. I’m talking about the web browser; a piece of software that anyone can use (with ease) regardless of platform. Every browser (Chrome, Firefox, Edge, Safari, etc.) functions in similar fashion, no matter the underlying operating system. That, in and of itself, has placed platform in the shadows. So unless your company depends upon a proprietary software system that was designed for (and only runs in) Windows, not making the move to Linux desktops is inviting trouble.

Make the switch and avoid the likes of WannaCrypt.

Also see

Linux


via LXer Linux News http://lxer.com/

May 20, 2017 at 07:26AM

How to Install Etcher, the open-source USB writer tool, on Ubuntu


http://ift.tt/2pJB28R

etcher image writer on ubuntu

Etcher, a popular open-source USB image writer tool for Windows, macOS and Linux, has just issued a new stable release.

Version 1.0 arrives almost one year to the day since we first introduced you to the easy-to-use image writer tool on this site. The stable release sees the app pick up a boat load of improvements that, its developers say, help make it “a much more stable and reliable tool”.

A Recap of Etcher Features

etcher usb selection

The company steering development of the app say over the course of the various beta releases Etcher was used to write over one million images to SD cards & USB drives.

Built using the Electron framework, Etcher is a true cross-platform app that can write .iso, .img and .zip files to USB drives and SD cards.

The main interface is dead simple to use: you select an image, select a drive (the built-in drive picker is designed to stop you accidentally overwriting a hard drive, etc) and hit Flash. Validated burning double-checks images after writing so that you’re not left faffing about trying to boot from a dud drive.

There’s still no sign of some previously planned features:

  • Support for creating multi-boot USBs
  • Support for persistent storage on Ubuntu images

The release does, however, register Etcher as the handler for *.img and *.iso files.

It’s not just the GUI client that’s gotten an update though. Etcher 1.0 also sees the first experimental release of the Etcher CLI.

The Etcher CLI lets you write images and validate flashes from the command line. As it doesn’t rely on the Electron framework, it’s a much smaller download and install. Its developers also tout the ability for users to write custom scripts using the CLI to “perform tasks such as multi-writes.”

How to install Etcher on Ubuntu

Etcher 1.0 is available to download for Windows, macOS and Linux from the Etcher.io website or from its Github page:

Etcher on Github

Linux builds are provided in the AppImage package format. AppImages are self-contained runtimes that do not require manual installation or root (though you do need to give the file permission to run as a programme). They will run on pretty much every distro out there — just download, and double-click to run:

etcher appimage

If you prefer to install your apps in a more traditional way, using apt, you can install Etcher on Ubuntu from the Etcher repository.

Now, getting this set up is a little more involved than with a regular PPA, but this method has the benefit of ensuring you get all future Etcher updates automatically through your update manager.

To add the Etcher repo open the Software & Updates app using the Unity Dash (or an alternative app launcher):

Select the ‘Other Software’ tab in Software & Updates [1]

adding repo software sources on ubuntu

Click ‘Add’ [2] and paste the following the entry field of the box that appears:

deb http://ift.tt/2qE7Sgv stable etcher

Click ‘Add Source’ [3] to confirm the change, then close Software & Updates. You’ll likely be prompted to update your software sources.

The next step is to add the repository key. This allows Ubuntu to verify that packages installed from the repository are made by who they say they are. You have to add this key to be able to install Etcher; Ubuntu will disable unsigned repos.

Open a new Terminal window, paste the following command, and then hit return/enter:

sudo apt-key adv --keyserver hkp://pgp.mit.edu:80 --recv-keys 379CE192D401AB61

Finally, update your packages list and install the app:

sudo apt update && sudo apt install etcher-electron

That’s it; launch Etcher from the Unity Dash (or an alternative app launcher) and follow the on-screen instructions.

Linux


via OMG! Ubuntu! http://ift.tt/eCozVa

May 13, 2017 at 02:16PM

4 terminal applications with great command-line UIs


http://ift.tt/2pWXiOl

In this article, I’ll look at a shortcoming of command-line interfaces—discoverability—and a few ways to overcome this problem.

I love command lines. My first command line was DOS 6.2, back in 1997. I learned the syntax for various commands and showed off how to list hidden files in a directory (attrib). I would carefully craft my commands one character at a time. When I made a mistake, I would proceed to retype the command from the beginning. One fine day someone showed me how to traverse the history using the up and down arrow keys and I was blown away.

Programming and development

Later when I was introduced to Linux, I was pleasantly surprised that up and down arrows retained their ability to traverse the history. I was still typing each character meticulously, but by now I knew how to touch type and I was doing exceedingly well with my 55 words per minute. Then someone showed me tab-completion and changed my life once again.

In GUI applications, menus, tooltips, and icons are used to advertise a feature to the user. Command lines lack that ability, but there are ways to overcome this problem. Before diving into solutions, I’ll look at a couple of problematic CLI apps:

1. MySQL

First we have our beloved MySQL REPL. I often find myself typing SELECT * FROM and then press Tab out of habit. MySQL asks whether I’d like to see all 871 possibilities. I most definitely don’t have 871 tables in my database. If I said yes, it shows a bunch of SQL keywords, tables, functions, and so on.

MySQL gif

2. Python

Let’s look at another example, the standard Python REPL. I start typing a command and press the Tab key out of habit. Lo and behold, a Tab character is inserted, which is a problem considering that a Tab character has no business in interactive Python input.

Python gif

Good UX

Now let’s look at well-designed CLI programs and how they overcome some discoverability problems.

Auto-completion: bpython

Bpython is a fancy replacement for the Python REPL. When I launch bpython and start typing, suggestions appear right away. I haven’t triggered them via a special key combo, not even the famed Tab key.

bpython gif

When I press the Tab key out of habit, it completes the first suggestion from the list. This is a great example of bringing discoverability to CLI design.

The next aspect of bpython is the way it surfaces documentation for modules and functions. When I type in the name of a function, it presents the function signature and the doc string attached to the function. What an incredibly thoughtful design.

Context-aware completion: mycli

Mycli is a modern alternative to the default MySQL client. This tool does to MySQL what bpython does to the standard Python REPL. Mycli will auto-complete keywords, table names, columns, and functions as you type them.

The completion suggestions are context-sensitive. For example, after the SELECT * FROM, only tables from the current database are listed in the completion, rather than every possible keyword under the sun.

mycli gif

Fuzzy search and online Help: pgcli

If you’re looking for a PostgreSQL version of mycli, check out pgcli. As with mycli, context-aware auto-completion is presented. The items in the menu are narrowed down using fuzzy search. Fuzzy search allows users to type sub-strings from different parts of the whole string to try and find the right match.

pgcli gif

Both pgcli and mycli implement this feature in their CLI. Documentation for slash commands is presented as part of the completion menu.

Discoverability: fish

In traditional Unix shells (Bash, zsh, etc.), there is a way to search your history. This search mode is triggered by Ctrl-R. This is an incredibly useful tool for recalling a command you ran last week that starts with, for example, ssh or docker. Once you know this feature, you’ll find yourself using it often.

If this feature is so useful, why not do this search all the time? That’s exactly what the fish shell does. As soon as you start typing a command, fish will start suggesting commands from history that are similar to the one you’re typing. You can then press the right arrow key to accept that suggestion.

Command-line etiquette

I’ve reviewed innovative ways to solve the discoverability problems, but there are command-line basics everyone should implement as part of the basic REPL functionality:

  • Make sure the REPL has a history that can be recalled via the arrow keys. Make sure the history persists between sessions.
  • Provide a way to edit the command in an editor. No matter how awesome your completions are, sometimes users just need an editor to craft that perfect command to drop all the tables in production.
  • Use a pager to pipe the output. Don’t make the user scroll through their terminal. Oh, and use sane defaults for your pager. (Add the option to handle color codes.)
  • Provide a way to search the history either via the Ctrl-R interface or the fish-style auto-search.
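For Bash specifically, the first and last points in the list above can be covered with a few lines in ~/.bashrc (the sizes below are common defaults, not values prescribed by the article):

```shell
# ~/.bashrc — keep a large, persistent, de-duplicated history
HISTSIZE=10000                # entries kept in memory for arrow-key recall
HISTFILESIZE=20000            # entries kept in ~/.bash_history across sessions
HISTCONTROL=ignoredups        # skip consecutive duplicate commands
shopt -s histappend           # append to the history file, don't overwrite it
PROMPT_COMMAND='history -a'   # flush each command to the file immediately
```

With histappend and history -a in place, commands typed in one terminal become searchable via Ctrl-R in another, which is exactly the kind of discoverability the article is after.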

Conclusion

In part 2, I’ll look at specific libraries in Python that allow you to implement these techniques. In the meantime, check out some of these well-designed command-line applications:

  • bpython or ptpython: Fancy REPL for Python with auto-completion support.
  • http-prompt: An interactive HTTP client.
  • mycli: A command-line interface for MySQL, MariaDB, and Percona with auto-completion and syntax highlighting.
  • pgcli: An alternative to psql with auto-completion and syntax-highlighting.
  • wharfee: A shell for managing Docker containers.

Learn more in Amjith Ramanujam’s  PyCon US 2017 talk, Awesome Commandline Tools, May 20th in Portland, Oregon.

Python,Linux

via Opensource.com

May 9, 2017 at 06:33PM


10 More Quick Tips to Make Linux Networking Easier

10 More Quick Tips to Make Linux Networking Easier

http://ift.tt/2pYfpn5

If you either work on a Linux desktop, or administer a Linux server, there might be times when frustration sets in over networking issues. Although Linux has made significant advances over the years, there are still instances where the standard troubleshooting or optimizations won’t work. To that end, you need to have some tricks and tips up your sleeve to make your life easier.

Linux


via http://ift.tt/1Wf4iBh

May 8, 2017 at 10:18AM

How to run command or code in parallel in bash shell under Linux or Unix


http://ift.tt/2q8BulH

How do I run commands in parallel in a bash shell script running under Linux or Unix-like operating systems? How can I run multiple programs in parallel from a bash script?

You have various options to run programs or commands in parallel:

=> Use GNU parallel or the xargs command.

=> Use the wait built-in command with &.

How to run multiple programs in parallel from a bash script in Linux / Unix?

Putting jobs in background

The syntax is:
command &
command arg1 arg2 &
custom_function &

OR
prog1 &
prog2 &
wait
prog3

In the above code sample, prog1 and prog2 would be started in the background, and the shell would wait until those are completed before starting the next program, named prog3.

Examples

In the following example, run the sleep command in the background:
$ sleep 60 &
$ sleep 90 &
$ sleep 120 &

To display the status of jobs in the current shell session, run the jobs command as follows:
$ jobs
Sample outputs:

[1]   Running                 sleep 60 &
[2]-  Running                 sleep 90 &
[3]+  Running                 sleep 120 &

Let us write a simple bash shell script:

#!/bin/bash
# Our custom function
cust_func(){
  echo "Do something $1 times..."
  sleep 1
}
# For loop 5 times
for i in {1..5}
do
	cust_func $i & # Put a function in the background
done
 
## Put all cust_func in the background and bash 
## would wait until those are completed 
## before displaying all done message
wait 
echo "All done"


Let us say you have a text file as follows:
$ cat list.txt
Sample outputs:

http://ift.tt/2piFZq2
http://ift.tt/2pKEy69
http://ift.tt/2pj1KWo
http://ift.tt/2pKCco9
http://ift.tt/2piQTMf
http://ift.tt/2pKvOgu
http://ift.tt/2piKKjd
http://ift.tt/2pKAL96
http://ift.tt/2piLzIT
http://ift.tt/2pKlQfe
http://ift.tt/2piXVAI
http://ift.tt/2pKvtKT

To download all files in parallel using wget:

#!/bin/bash
# Our custom function
cust_func(){
  wget -q "$1"
}
 
while IFS= read -r url
do
        cust_func "$url" &
done < list.txt
 
wait
echo "All files are downloaded."


GNU parallel examples

From the GNU project site:

GNU parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables.

The syntax is pretty simple:
parallel ::: prog1 prog2
For example, you can find all *.doc files and gzip (compress) them using the following syntax:
$ find . -type f -name '*.doc' | parallel gzip --best
$ find . -type f -name '*.doc.gz'

Our above wget example can be simplified using GNU parallel as follows:
$ cat list.txt | parallel -j 4 wget -q {}
OR
$ parallel -j 4 wget -q {} < list.txt
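The xargs route mentioned at the start of the article works the same way: -n 1 passes one argument per invocation and -P caps the number of parallel processes. A quick sketch with echo (the commented line shows the equivalent of the wget example above):

```shell
# Run up to 3 echo jobs at a time; each argument becomes one invocation.
printf '%s\n' one two three four five | xargs -n 1 -P 3 echo

# The earlier wget loop collapses to a single line the same way:
#   xargs -n 1 -P 4 wget -q < list.txt
```

Unlike the plain `&` + `wait` approach, xargs keeps at most -P jobs running at once, so a long input list won’t fork hundreds of processes simultaneously.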

See also

Linux


via [RSS/Feed] nixCraft: Linux Tips, Hacks, Tutorials, And Ideas In Blog Format http://ift.tt/Awfi7s

May 5, 2017 at 06:02PM