Stats Say Linux Marketshare Hit All-Time High Last Month

http://ift.tt/2tEPsg5

Desktop Linux marketshare hit an all-time high last month, according to the latest data from web analytics firm NetMarketShare.

The company reports that Linux users made up 2.36% of visits to the websites it tracks last month, the highest the Linux figure has ever been.

Not that this uptick is surprising. It continues a trend we’ve seen over the past 12 months which has (more or less) seen Linux usage rank firmly above the 2% line on NetMarketShare.

More impressively, the latest figures mean Linux usage is roughly a third that of Apple macOS, which sits at 6.12% in the June 2017 rankings, down from the previous month.

The combined flavors of Microsoft Windows (who else?) continue to eat up the lion’s share of the desktop operating system chart, though they fell to 91.51% during the same period.

Caution: Caution Advised

As we always say when we present stats like this: make sure you take ’em with a large pinch of Na.

Why?

It’s because statistics, numbers and reporting methods not only vary between competing analytics companies but are also open to interpretation, debate and potential errors.

Furthermore, NetMarketShare accrues its data based on visits to a mere 40,000 websites globally. While 40,000 is a largish sample size, it’s also ludicrously small when compared against the number of websites that are out there!

Finally, while the Linux figure reported does exclude Android/Linux it does include ChromeOS/Linux in addition to GNU/Linux, leading some to attribute the rise in Linux marketshare to Google’s Chrome OS.

Linux

via OMG! Ubuntu! http://ift.tt/eCozVa

July 3, 2017 at 07:12PM

Linux is Running on Almost All of the Top 500 Supercomputers

http://ift.tt/2rVAK0R

Brief: Linux may not have a decent market share on the desktop, but it rules the supercomputers, with 498 of the top 500 supercomputers running Linux.

Linux is still running on more than 99% of the top 500 fastest supercomputers in the world. Same as last year, 498 of the top 500 supercomputers run Linux while the remaining 2 run Unix.

No supercomputer dared to run Windows (pun intended). And of course, no supercomputer runs macOS because Apple has not manufactured the ‘iSupercomputer’ yet.

This information is collected by Top500, an independent organization that publishes details about the top 500 fastest supercomputers known to it, twice a year. You can go to the website and filter the list by country, OS type, vendor, etc.

No worries if you don’t want to do that, because I’ll present some of the interesting facts here.

Linux rules supercomputers because it is open source

Twenty years back, most supercomputers ran Unix. But eventually, Linux took the lead and became the preferred operating system for supercomputers.

Growth of Linux on Supercomputers. Image credit: ZDNet

The main reason for this growth is the open source nature of Linux. Supercomputers are specific devices built for specific purposes. This requires a custom operating system optimized for those specific needs.

Unix, being a closed source and proprietary operating system, is an expensive deal when it comes to customization. Linux, on the other hand, is free and easier to customize. Engineering teams can easily customize a Linux-based operating system for each of the supercomputers.

However, I wonder why open source variants such as FreeBSD failed to gain popularity on supercomputers.

To summarize the list of top 500 supercomputers based on OS this year:

  • Linux: 498
  • Unix: 2
  • Windows: 0
  • MacOS: 0

To give you a year-wise summary of Linux’s share of the top 500 supercomputers:

  • In 2012: 94%
  • In 2013: 95%
  • In 2014: 97%
  • In 2015: 97.2%
  • In 2016: 99.6%
  • In 2017: 99.6%
  • In 2018: ???

The only two supercomputers running Unix are ranked 493rd and 494th:

Supercomputers running Unix

Some other interesting stats about fastest supercomputers

Top 10 Fastest Supercomputers in 2017

Linux aside, here are some other interesting stats about supercomputers this year:

  • The world’s fastest supercomputer, Sunway TaihuLight, is housed at the National Supercomputing Center in Wuxi, China. It has a speed of 93 PFLOPS.
  • The world’s second fastest supercomputer, Tianhe-2, is also based in China, while the third spot is taken by Switzerland’s Piz Daint.
  • Out of the top 10 fastest supercomputers, the USA has 5, Japan and China have 2 each, and Switzerland has 1.
  • The United States leads with 168 supercomputers in the list, followed by China with 160.
  • Japan has 33, Germany 28, France 18, Saudi Arabia 6, India 4 and Russia 3 supercomputers in the list.

Interesting facts, aren’t they? You can filter the list here yourself for further details.

While you are at it, do share this article on social media. It’s an achievement for Linux and we’ve got to show it off 😀

Linux

via LXer Linux News http://lxer.com/

June 27, 2017 at 05:01AM

A quick guide to using FFmpeg to convert media files

http://ift.tt/2qRJBUI

There are many open source tools out there for editing, tweaking, and converting multimedia into exactly what you need. Tools like Audacity or Handbrake are fantastic, but sometimes you just want to change a file from one format into another quickly. Enter FFmpeg.
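A couple of minimal conversions give the flavor. This sketch assumes ffmpeg is installed and on your PATH; the filenames are placeholders, and the first command synthesizes a dummy clip only so the example is self-contained:

```shell
# Synthesize a one-second test clip from FFmpeg's built-in test source;
# in practice you would start from your own input file.
ffmpeg -loglevel error -y -f lavfi -i testsrc=duration=1:size=320x240:rate=25 \
    -c:v mpeg4 input.mp4

# Basic conversion: FFmpeg infers the target format from the file extension.
ffmpeg -loglevel error -y -i input.mp4 output.avi

# Remux without re-encoding when only the container needs to change.
ffmpeg -loglevel error -y -i input.mp4 -c copy output.mkv
```

The `-c copy` form is near-instant because no transcoding happens; drop it when you actually need a different codec.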

Linux

via Opensource.com http://ift.tt/1EBSQUh

June 5, 2017 at 03:14AM

How to configure Nginx SSL/TLS passthrough with TCP load balancing

http://ift.tt/2s04gG9

How do I configure SSL/TLS pass through on Nginx load balancer running on Linux or Unix-like system? How do I load balance TCP traffic and setup SSL Passthrough to pass SSL traffic received at the load balancer onto the backend web servers?
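The short answer is nginx’s stream module. A minimal sketch of a TCP load balancer with SSL passthrough might look like this (the backend addresses are hypothetical, and nginx must be built with `--with-stream`):

```nginx
stream {
    upstream tls_backends {
        server 192.168.0.10:443;
        server 192.168.0.11:443;
    }

    server {
        listen 443;
        proxy_pass tls_backends;
        # No ssl_certificate here: TLS is not terminated at the load
        # balancer; encrypted traffic is forwarded as-is and the
        # backend web servers present their own certificates.
    }
}
```

Because the balancer never decrypts the traffic, it balances at the TCP level and cannot route on HTTP headers; SNI-based routing would need the separate ssl_preread module.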

Linux

via [RSS/Feed] nixCraft: Linux Tips, Hacks, Tutorials, And Ideas In Blog Format http://ift.tt/Awfi7s

June 6, 2017 at 10:03AM

Learn the Secrets of Building a Business with Open Source

http://ift.tt/2sKPgZ2

Today, if you’re building a new product or service, open source software is likely playing a role. But many entrepreneurs and product managers still struggle with how to build a successful business purely on open source.                    

The big secret of a successful open source business is that “it’s about way more than the code,” says John Mark Walker, a well-known voice in the open source world with extensive expertise in open source product, community, and ecosystem creation at Red Hat and Dell EMC.  “In order to build a certified, predictable, manageable product that ‘just works,’ it requires a lot more effort than just writing good code.”

It requires a solid understanding of open source business models and the expertise and management skills to take advantage of developing your products in an open source way.

In a new eBook, Building a Business on Open Source, The Linux Foundation has partnered with Walker to distill what it takes to create and manage a product or service built on open source software.  It starts with an overview of the various business models, then covers the business value of the open source platform itself, and describes how to create a successful open source product and manage the software supply chain behind it.  

“If you’re developing software in an open source way, you have options that proprietary developers don’t have,” Walker writes. “You can deliver better software more efficiently that is more responsive to customer needs — if you do it well and apply best practices.”

The Value of the Open Source Platform            

As open source has become more prevalent, it has changed the way products are developed. Walker describes the unique challenges and questions raised by adopting an open source approach, including questions of sustainability, accountability, and monetization.

Walker admits that Red Hat remains the only company that has been successful with a pure open source business model (without being acquired). Many companies are still pursuing a similar model selling open source software, but other models around open source exist, including the venture-capitalist’s favorite open core model, a services and support model, and a hybrid model that mixes open source code with proprietary components.

In discussing the difference between open core and hybrid business models, Walker says his biggest problem with them is that they both assume there is no intrinsic value in the platform itself.

“I am not discounting the added value of proprietary software on top of open source platforms; I am suggesting that the open source platforms themselves are inherently valuable and can be sold as products in their own right, if done correctly,” Walker states.

“If you begin with the premise that open source platforms have great value, and you sell that value in the form of a certified software product, that’s just a starting point. The key is that you’re selling a certified version of an open source platform and from there, it’s up to you how to structure your product approach,” he continues.

What’s emerging now is a new “open platform model,” in which the open source platform itself is sold in the form of a certified product. It may include proprietary add-ons, but derives most of its value from the platform.

A Messy Business

Creating a business purely around an open source platform requires new thinking, and a new process. It’s difficult to turn the code that’s available to everyone for free into a product that just works and can be used at scale.

“Creating a product is a messy, messy business. There are multiple layers of QA, QE, and UX/UI design that, after years of effort, may result in something that somewhat resembles a usable product,” writes Walker.

Walker explains the distinction between an open source project and a product that’s based on that project. He points out that “creating, marketing and selling a product is no different in the open source space from any other endeavor.”

He details the process of making a product out of an open source project; it’s not nearly as easy as packaging the code into a product and charging for it.

Mastering the Supply Chain

Part two of the ebook covers more advanced topics, including the management of open source software supply chains, which offers some unique challenges.

“A well-managed supply chain is crucial to business success. How do you determine ideal pricing, build relationships with the right manufacturers, and maximize the efficiency of your supply chain so you’re able to produce more products cheaply and sell more of them?” asks Walker.

“One potential conclusion is that to be successful at open source products, you must master the ability to influence and manage the sundry supply chains that ultimately come together in the product creation process,” he says.

In the final chapter, Walker takes a deep dive into the importance of being an influencer of the supply chain. He talks about some best practices in the process of evaluating supply chain components and gives examples of companies like Red Hat who have an upstream first policy that plays a big role in making them an influencer of the supply chain.

The crux is, “To get the most benefit from the open source software supply chain, you must be the open source software supply chain.”

Conclusion

It might sound easy to take some free source code, package it up, and create a product out of it. In reality it’s a very challenging job. But if you do it right, an open source approach offers immense benefits that are unmatched in the closed source world.

That’s exactly what this book is all about. Doing it right. The methodologies and processes detailed by Walker will help companies, managers, and developers adopt best practices to create valuable open source products as open source business models shift, yet again.

Learn how to build a business on open source. Download the free ebook today!

Linux

via http://ift.tt/1Wf4iBh

June 5, 2017 at 05:27PM

Accidentally overwrote a binary file on Linux? Here is how to restore it

http://ift.tt/2qgqSgq

Posted in Categories Command Line Hacks; last updated May 23, 2017

A shell script went wild due to a bug, and the script overwrote the binary file /bin/ping. Here is how to restore it.

/bin/ping erased (Credit: http://ift.tt/1l0OTWM)


There are two ways to solve this problem.

Easy way: copy it from another server

Just scp the file from another box running the same version of your Linux distribution (substitute your own server for user@server):
$ sudo scp user@server:/bin/ping /bin/ping

Proper sysadmin way: Search and reinstall package

First, query which package provides the file /bin/ping, as per your Linux distro:

Debian/Ubuntu Linux users, type:

$ dpkg -S /bin/ping
iputils-ping: /bin/ping

Now just reinstall the iputils-ping package using apt-get command or apt command:
$ sudo apt-get --reinstall install iputils-ping

RHEL/SL/Scientific/Oracle Linux users, type:

$ yum provides /bin/ping
iputils-20071127-24.el6.x86_64 : Network monitoring tools including ping

Now just reinstall the iputils package using yum command:
$ sudo yum reinstall iputils

Fedora Linux users, type:

$ dnf provides /bin/ping
iputils-20161105-1.fc25.x86_64 : Network monitoring tools including ping

Now just reinstall the iputils package using dnf command:
$ sudo dnf reinstall iputils

Arch Linux users, type:

$ pacman -Qo /bin/ping
/usr/bin/ping is owned by iputils 20161105.1f2bb12-2

Now just reinstall the iputils package using pacman command:
$ sudo pacman -S iputils

Suse/OpenSUSE Linux users, type:

$ zypper search -f /bin/ping
Sample outputs:

Loading repository data...
Reading installed packages...

S | Name    | Summary                            | Type   
--+---------+------------------------------------+--------
  | ctdb    | Clustered TDB                      | package
i | iputils | IPv4 and IPv6 Networking Utilities | package
  | pingus  | Free Lemmings-like puzzle game     | package

Now just reinstall the iputils package using the zypper command:
$ sudo zypper install -f iputils

What can be done to avoid such problems in the future?

Testing in a sandbox is an excellent way to prevent such problems. Care must also be taken to make sure a variable has a value. The following is dangerous:
echo "foo" > $file
Something like the following would help (see “If Variable Is Not Defined, Set Default Variable”):
file="${1:-/tmp/file.txt}"
echo "foo" > $file

Another option is to stop if the variable is not defined:
${Variable?Error \$Variable is not defined}
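Putting both patterns together in a small runnable sketch (the path /tmp/file.txt is just an illustrative default):

```shell
# Fall back to a default when $1 is not supplied, so the redirect
# below always has a real target instead of an empty string.
file="${1:-/tmp/file.txt}"
echo "foo" > "$file"

# Abort with an error message if the variable is somehow unset;
# the script stops here instead of clobbering an unintended path.
: "${file:?file is not defined}"
echo "wrote $file"
```

Quoting "$file" in the redirect is also worth the habit: it keeps paths with spaces from splitting into multiple words.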

Linux

via [RSS/Feed] nixCraft: Linux Tips, Hacks, Tutorials, And Ideas In Blog Format http://ift.tt/Awfi7s

May 23, 2017 at 02:27PM

Say Hello to the Slimbook Pro, a 13-inch Linux Laptop

http://ift.tt/2qMtB4v

Spanish hardware company Slimbook is on a roll.

It already caters to the needs of KDE enthusiasts, and recently it unsheathed the impressive aluminium 15-inch Slimbook Excalibur.

Today the company takes the shrink wrap off of yet another Linux powered laptop.

Say hello to the Slimbook Pro.

Slimbook Pro Specs & Price

With an aluminium body, lightweight build and 13.1-inch display, the Slimbook Pro plays smaller sibling to the 15.6-inch Slimbook Excalibur.

The base model ships with a standard FHD (1920 x 1080) panel, but for a mere €49 more you can upgrade this to a QHD+ (3200 x 1800) HiDPI display.

Canonical is sponsoring a HiDPI hackfest for GNOME right now, so if you opt to go HiDPI you can expect to see various improvements to the Ubuntu HiDPI experience in future releases.

Inside there’s a choice of 7th Gen Intel ‘Kaby Lake’ processors:

  • Intel i3-7100U @ 2.4GHz
  • Intel i5-7200U @ 2.5GHz
  • Intel i7-7500U @ 2.7GHz

The integrated graphics of Kaby Lake won’t handle top-tier gaming titles at max frame rates but are perfectly adequate for most needs.

All models come with 4GB RAM as standard (8GB and 16GB upgrades available). Base storage is 120GB SSD, with a variety of upgrades and second hard disk options available at extra cost — yup, there’s enough space inside for 2 hard drives.

You can also elect to kit your Slimbook Pro out with Intel Dual Band 8265AC WiFi, which apparently has “better signal and stability in the latest Linux kernels“.

As a portable laptop rather than a portable workstation, power consumption is a key consideration. Improvements in the Linux kernel combined with the lower power consumption of Intel’s Kaby Lake processors mean you should expect a decent amount of battery life from the Pro; for comparison, the 13.1-inch System76 Galago Pro manages a terse 4 hours max, and you should be able to eke a bit more out of the Slimbook Pro.

Ports-wise, the laptop has all you could ask for, including:

  • Full-size HDMI out
  • Mini Display Port
  • 2x USB 3.1
  • 1x USB Type-C
  • SD card slot
  • Ethernet RJ45 jack
  • Courage jack
  • Mic jack

Along with a full-size backlit keyboard in either American or Spanish layouts, the Slimbook Pro also uses a Synaptics touchpad.

The Slimbook Pro price starts at €699 for the base Intel i3 model.

There is no denying that, yes, with smaller computer companies selling Linux products you do tend to pay a little bit more for the privilege — but that’s economies of scale for you; you can’t expect to pay own-brand label prices for what is, in effect, organic produce.

Linux

via OMG! Ubuntu! http://ift.tt/eCozVa

May 23, 2017 at 05:55PM

A Brief Look at the Roots of Linux Containers

http://ift.tt/2q2SBFX

In previous excerpts of the new, self-paced Containers Fundamentals course from The Linux Foundation, we discussed what containers are and are not. Here, we’ll take a brief look at the history of containers, which includes chroot, FreeBSD jails, Solaris zones, and systemd-nspawn. 

Chroot was first introduced in 1979, during development of Seventh Edition Unix (also called Version 7), and was added to BSD in 1982. In 2000, FreeBSD extended chroot to FreeBSD Jails. Then, in the early 2000s, Solaris introduced the concept of zones, which virtualized the operating system services.

With chroot, you can change the apparent root directory for the currently running process and its children. After configuring chroot, subsequent commands will run with respect to the new root (/). With chroot, we can limit processes only at the filesystem level; they still share resources like users, hostname, IP address, etc. FreeBSD Jails extended the chroot model by virtualizing users, the network sub-system, etc.

systemd-nspawn has not been around as long as chroot and Jails, but it can be used to create containers, which would be managed by systemd. On modern Linux operating systems, systemd is used as an init system to bootstrap the user space and manage all the processes subsequently.

This training course, presented mainly in video format, is aimed at those who are new to containers and covers the basics of container runtimes, container storage and networking, Dockerfiles, Docker APIs, and more.

You can learn more in the sample course video below, presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook:

Want to learn more? Access all the free sample chapter videos now!

Linux

via http://ift.tt/1Wf4iBh

May 22, 2017 at 09:58AM

WannaCrypt makes an easy case for Linux

http://ift.tt/2pVIkeQ

Image: Jack Wallen

Ransomware is on the rise. On a single day, WannaCrypt held hostage over 57,000 users worldwide, demanding anywhere between $300 and $600 in Bitcoin. Don’t pay up and you’ll not be seeing your data again. Before I get into the thrust of this piece, if anything, let WannaCrypt be a siren call to everyone to back up your data. Period. End of story. With a solid data backup, should you fall prey to ransomware, you are just an OS reinstall and a data restore away from getting back to work.

That being said, if there was ever a time for Linux to shine on the desktop, it’s now. I know, I know. Eyes are being rolled and cries of “This again?” are bouncing across the whole of the internet.

Hear me out.

This particular ransomware was nasty; not just in scope, but in design. Consider this:

  • WannaCrypt possesses the capability to spread itself
  • WannaCrypt exploits a known vulnerability in Windows
  • WannaCrypt uses the SMB protocol which is often unfiltered within corporate networks
  • The tools behind WannaCrypt (EternalBlue and DoublePulsar) originated within the NSA
  • Computers in 150 countries were affected (including machines within FedEx, Renault, Telefonica, as well as hospital computer systems across Europe)

The above knowledge (and more) can be found reported just about anywhere (as well as the story behind the man who stopped new infections). The thing is, WannaCrypt isn’t the first of its kind. In fact, ransomware has been exploiting vulnerabilities for a while. The first known ransomware attack, the “AIDS Trojan,” infected machines back in 1989. That particular ransomware replaced the autoexec.bat file. The new file counted the number of times a machine had been booted; when the machine reached a count of 90, all of the filenames on the C drive were encrypted.

SEE: Patching WannaCrypt: Dispatches from the frontline

Windows, of course, isn’t the only platform to have been hit by ransomware. In fact, back in 2015, the LinuxEncoder ransomware was discovered. That bit of malicious code, however, only affected servers running the Magento ecommerce solution.

The important question here is this: Have there been any ransomware attacks on the Linux desktop? The answer is no.

With that in mind, it’s pretty easy to draw the conclusion that now would be a great time to start deploying Linux on the desktop.

But, but, but!

I can already hear the tired arguments. The primary issue: software. I will counter that argument by saying this: Most software has migrated to either Software as a Service (SaaS) or the cloud. The majority of work people do is via a web browser. Chrome, Firefox, Edge, Safari; with few exceptions, SaaS doesn’t care. With that in mind, why would you want your employees and staff using a vulnerable system?

Consider this: If you have an employee that works a crucial position out in the field and you provide their transportation, would you have them driving a vehicle with a known issue? Say, you know the vehicle has a cracked engine block or frame and could, at any minute, suffer catastrophic failure. That failure could (at best) be the cause of the employee losing a day’s work and (at worst) endanger that employee’s life.

Would you willingly send that employee out in the vehicle? No, you wouldn’t.

Apply that same analogy to your staff computers. Why would you willingly expect them to work with a platform that has suffered from vulnerabilities known to lead to such exploits as WannaCrypt; vulnerabilities that (at best) cause said employee to lose a day’s work and (at worst) dox said employee or negatively impact your bottom line? The difference here is that you would be (and are) willing to deploy systems that are a malformed URL away from compromise.

SEE: Why patching Windows XP forever won’t stop the next WannaCrypt

Nothing is perfect

Don’t get me wrong, I’m not saying Linux is perfect. Any system connected to a network can fall victim to something. But the truth of the matter is, by design, Linux is far less susceptible to the likes of WannaCrypt than is Windows. How do I know this? I’ve been using Linux as my only operating system (on servers and desktops) since 1997 and have only encountered one instance of malicious code (a rootkit on a poorly administered mail server). Those are some pretty good odds there.

Imagine, if you will, you have deployed Linux as a desktop OS for your company and those machines work like champs from the day you set them up to the day the hardware finally fails. Doesn’t that sound like a win your company could use? If your employees work primarily with SaaS (through web browsers), then there is nothing keeping you from making the switch to a more reliable, secure platform.

Don’t fear change

I get it; I really do. From top to bottom, people fear change. But this fear has been assuaged with users working primarily within a tool that holds a significant amount of universality. I’m talking about the web browser; a piece of software that anyone can use (with ease) regardless of platform. Every browser (Chrome, Firefox, Edge, Safari, etc.) functions in similar fashion, no matter the underlying operating system. That, in and of itself, has placed platform in the shadows. So unless your company depends upon a proprietary software system that was designed for (and only runs in) Windows, not making the move to Linux desktops is inviting trouble.

Make the switch and avoid the likes of WannaCrypt.

Also see

Linux

via LXer Linux News http://lxer.com/

May 20, 2017 at 07:26AM