A Modern Day Front-End Development Stack

http://ift.tt/2t9uEdT

Application development methodologies have seen a lot of change in recent years. With the rise and adoption of microservice architectures, cloud computing, single-page applications, and responsive design to name a few, developers have many decisions to make, all while still keeping project timelines, user experience, and performance in mind. Nowhere is this more true than in front-end development and JavaScript.

To help catch everyone up, we’ll take a brief look at the revolution in JavaScript development over the last few years. Next, we’ll look at some of the challenges and opportunities facing the front-end development community. To wrap things up, and to help lead into the next parts of this series, we’ll preview the components of a fully modern front-end stack.

The JavaScript Renaissance

When NodeJS came out in 2009, it was more than just JavaScript on the command line or a web server running in JavaScript. NodeJS concentrated software development around something that was desperately needed: a mature and stable ecosystem focused on the front-end developer. Thanks to Node and its default package manager, npm, JavaScript saw a renaissance in how applications could be architected (e.g., Angular leveraging Observables or the functional paradigms of React) as well as how they were developed. The ecosystem thrived, but because it was young it also churned constantly.

Happily, the past few years have allowed certain patterns and conventions to rise to the top. In 2015, the JavaScript community saw the release of a new spec, ES2015, along with an even greater explosion in the ecosystem. The illustration below shows just some of the most popular JavaScript ecosystem elements.

FrontendToolingArray.png

State of the JavaScript ecosystem in 2017

At Kenzan, we’ve been developing JavaScript applications for more than 10 years on a variety of platforms, from browsers to set-top boxes. We’ve watched the front-end ecosystem grow and evolve, embracing all the great work done by the community along the way. From Grunt™ to Gulp, from jQuery® to AngularJS, from copying scripts to using Bower for managing our front-end dependencies, we’ve lived it.

As JavaScript matured, so did our approach to our development processes. Building off our passion for developing well-designed, maintainable, and mature software applications for our clients, we realized that success always starts with a strong local development workflow and stack. The desire for dependability, maturity, and efficiency in the development process led us to the conclusion that the development environment could be more than just a set of tools working together. Rather, it could contribute to the success of the end product itself.  

Challenges and Opportunities

With so many choices, and such a robust and blossoming ecosystem at present, where does that leave the community? While having choices is a good thing, it can be difficult for organizations to know where to start, what they need to be successful, and why they need it. As user expectations grow for how an application should perform and behave (load faster, run more smoothly, be responsive, feel native, and so on), it gets ever more challenging to find the right balance between the productivity needs of the development team and the project’s ability to launch and succeed in its intended market. There is even a term for this: analysis paralysis, the difficulty of arriving at a decision due to overthinking and needlessly complicating a problem.

Chasing the latest tools and technologies can inhibit velocity and the achievement of significant milestones in a project’s development cycle, risking time to market and customer retention. At a certain point an organization needs to define its problems and needs, and then make a decision from the available options, understanding the pros and cons so that it can better anticipate the long-term viability and maintainability of the product.

At Kenzan, our experience has led us to define and coalesce around some key concepts and philosophies that ensure our decisions will help solve the challenges we’ve come to expect from developing software for the front end:

  • Leverage the latest features available in the JavaScript language to support more elegant, consistent, and maintainable source code (like import / export (modules), class, and async/await; see the brief sketch after this list).

  • Provide a stable and mature local development environment with low-to-no maintenance (that is, no global development dependencies for developers to install or maintain, and intuitive workflows/tasks).

  • Adopt a single package manager to manage front-end and build dependencies.

  • Deploy optimized, feature-based bundles (packaged HTML, CSS, and JS) for smarter, faster distribution and downloads for users. Combined with HTTP/2, large gains can be made here for little investment to greatly improve user experience and performance.
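To make the first of these principles concrete, here is a minimal sketch of the kind of source code these language features enable. The module and helper names (user-service, httpGet) are illustrative assumptions, not part of any particular framework:

// user-service.js: ES2015 modules, classes, and async/await in one place
import { httpGet } from './http-client';   // hypothetical helper that returns a Promise of parsed JSON

export class UserService {
  constructor(baseUrl) {
    this.baseUrl = baseUrl;
  }

  // async/await keeps asynchronous code flat and readable
  async fetchUser(id) {
    const user = await httpGet(`${this.baseUrl}/users/${id}`);
    return user;
  }
}

Compared with the globally scoped, callback-heavy patterns of a few years ago, the intent of this code is much easier to follow, and tools like webpack and TypeScript (covered below) understand these constructs natively.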

A New Stack

In this series, our focus is on three core components of a front-end development stack. For each component, we’ll look at the tool that we think brings the best balance of dependability, productivity, and maintainability to modern JavaScript application development, and that is best aligned with our desired principles.

Package Management: Yarn

The challenge of how to manage and install external vendor or internal packages in a dependable and consistently-reproducible way is critical to the workflow of a developer. It’s also critical for maintaining a CI/CD (continuous integration/continuous delivery) pipeline. But, which package manager do you choose given all the great options available to evaluate? npm? jspm? Bower? CDN? Or do you just copy and paste from the web and commit to version control?    

Our first article will look at Yarn and how it focuses on being fast and providing stable builds. Yarn accomplishes this by ensuring the version of a vendor dependency installed today will be the exact same version installed by a developer next week. It is imperative that this process is frictionless and reliable, distributed and at scale, because any downtime prevents developers from being able to code or deploy their applications. Yarn aims to address these concerns by providing a fast, reliable alternative to the npm cli for managing dependencies, while continuing to leverage the npm registry as the host for public Node packages. Plus it’s backed by Facebook, an organization that has scale in mind when developing their tooling.

Application Bundling: webpack™

Building a front-end application, which is typically composed of a mix of HTML, CSS, and JS, as well as binary formats like images and fonts, can be tricky to maintain and even more challenging to orchestrate. So how does one turn a code base into an optimized, deployable artifact? Gulp? Grunt? Browserify? Rollup? SystemJS? All of these are great options, each with its own strengths and weaknesses, but we need to make sure the choice reflects the principles we discussed above.

webpack is a build tool specifically designed to package and deploy web applications composed of any kind of asset (HTML, CSS, JS, images, fonts, and so on) into an optimized payload to deliver to users. We want to take advantage of the latest language features like import/export and class to make our code future-facing and clean, while letting the tooling orchestrate the bundling of our code so that it is optimized for both the browser and the user. webpack can do just that, and more!
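As a rough illustration of what that orchestration looks like, here is a minimal webpack configuration sketch. The entry point, output path, and loader choices are assumptions for the sake of the example rather than a recommended setup:

// webpack.config.js: a minimal sketch, not a production configuration
const path = require('path');

module.exports = {
  // start from a single entry module and follow its import graph
  entry: './src/index.js',
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'bundle.[chunkhash].js'   // content hashing supports long-term caching
  },
  module: {
    rules: [
      // loaders teach webpack how to treat non-JavaScript assets
      { test: /\.css$/, use: ['style-loader', 'css-loader'] },
      { test: /\.(png|jpg|woff2?)$/, use: 'file-loader' }
    ]
  }
};

From that single entry point, webpack walks every import in the code base and emits an optimized, deployable bundle, which is exactly the artifact we want to hand to the browser.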

Language Specification: TypeScript

Writing clean code in and of itself is always a challenge. JavaScript, which is a dynamic language and loosely typed, has afforded developers a medium to implement a wide range of design patterns and conventions. Now, with the latest JavaScript specification, we see more solid patterns from the programming community making their way into the language. Support for features like the use of import/export and class have brought a fundamental paradigm shift to how a JavaScript application can be developed, and can help ensure that code is easier to write, read, and maintain. However, there is still a gap in the language that generally begins to impact applications as they grow: maintainability and integrity of the source code, and predictability of the system (the application state at runtime).

TypeScript is a superset of JavaScript that adds type safety, access modifiers (private and public), and newer features from the next JavaScript specification. The safety of a more strictly typed language can help promote and then enforce architectural design patterns by using a transpiler to validate code before it even gets to the browser, which helps to reduce developer cycle time while also being self-documenting. This is particularly advantageous because, as applications grow and change happens within the codebase, TypeScript can help keep regressions in check while adding clarity and confidence to the code base. IDE integration is a huge win here as well.
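Here is a small, self-contained sketch of what those guarantees look like in practice (the types and class are purely illustrative):

// TypeScript: interfaces, type annotations, and access modifiers
interface User {
  id: number;
  name: string;
}

class UserCache {
  // 'private' keeps the underlying storage an implementation detail
  private users: Map<number, User> = new Map();

  public add(user: User): void {
    this.users.set(user.id, user);
  }

  public find(id: number): User | undefined {
    return this.users.get(id);
  }
}

const cache = new UserCache();
cache.add({ id: 1, name: 'Ada' });
// cache.add({ id: '2', name: 'Grace' });   // rejected at compile time: 'id' must be a number

The commented-out call never makes it past the transpiler, which is precisely the kind of regression-checking and self-documentation described above.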

What About Front-End Frameworks?

As you may have noticed, so far we’ve intentionally avoided recommending a front-end framework or library like Angular or React, so let’s address that now.

Different applications call for different approaches to their development based on many factors like team experience, scope and size, organizational preference, and familiarity with concepts like reactive or functional programming. At Kenzan, we believe evaluating and choosing any ES2015/TypeScript compatible library or framework, be it Angular 2 or React, should be based on characteristics specific to the given situation.  

If we revisit our illustration from earlier, we can see a new stack take form that provides flexibility in choosing front-end frameworks.

FrontendToolingSimplified.png
A modern stack that offers flexibility in front-end frameworks

Below this upper “view” layer is a common ground that can be built upon by leveraging tools that embrace our key principles. At Kenzan, we feel that this stack converges on a space that captures the needs of both user and developer experience. This yields results that can benefit any team or application, large or small. It is important to remember that the tools presented here are intended for a specific type of project development (front-end UI application), and that this is not intended to be a one-size-fits-all endorsement. Discretion, judgement, and the needs of the team should be the prominent decision-making factors.

What’s Next

So far, we’ve looked back at how the JavaScript renaissance of the last few years has led to a rapidly-maturing JavaScript ecosystem. We laid out the core philosophies that have helped us to meet the challenges and opportunities of developing software for the front end. And we outlined three main components of a modern front-end development stack. Throughout the rest of this series, we’ll dive deeper into each of these components. Our hope is that, by the end, you’ll be in a better position to evaluate the infrastructure you need for your front-end applications.

We also hope that you’ll recognize the value of the tools we present as being guided by a set of core principles, paradigms, and philosophies. Writing this series has certainly caused us to put our own experience and process under the microscope, and to solidify our rationale when it comes to front-end tooling. Hopefully, you’ll enjoy what we’ve discovered, and we welcome any thoughts, questions, or feedback you may have.

Next up in our blog series, we’ll take a closer look at the first core component of our front-end stack—package management with Yarn.

Kenzan is a software engineering and full service consulting firm that provides customized, end-to-end solutions that drive change through digital transformation. Combining leadership with technical expertise, Kenzan works with partners and clients to craft solutions that leverage cutting-edge technology, from ideation to development and delivery. Specializing in application and platform development, architecture consulting, and digital transformation, Kenzan empowers companies to put technology first.

Grunt, jQuery, and webpack are trademarks of the JS Foundation.


via LXer Linux News http://lxer.com/

July 18, 2017 at 03:35AM

Linux df Command Tutorial for Beginners (8 Examples)

http://ift.tt/2u48jPY

Sometimes, you might want to know how much space is consumed (and how much is free) on a particular file system on your Linux machine. There is a specific command – dubbed df – that does this for you. In this tutorial, we will discuss the basics of this command, as well as some of the major features it offers.

But before we do that, it’s worth mentioning that all examples and instructions mentioned in the article have been tested on Ubuntu 16.04 LTS.

Linux df command

Here’s the syntax of this tool:

df [OPTION]… [FILE]…

And here’s how the man page describes the command:

df displays the amount of disk space available on the file system containing each file name argument. If no file name is given, the space available on all currently mounted file systems is shown.

The following Q&A-type examples should give you a better idea of how this command line utility works.

Q1. How to make df display disk usage of file system containing a specific file?

Suppose you have a file (say, file1), and the requirement is to display the available or used space on the file system that contains this file. This is how you can do this:

df <filename>

Here’s an example:

make df display disk usage of file system containing a specific file

Q2. How to make df display disk usage of all file systems?

In case you want the tool to display disk usage information for all file systems, all you need to do is to run the following command:

df

Here’s the output of the command in my case:

make df display disk usage of all file systems

Q3. How to make df display usage information in human-readable form?

If the requirement is to make df display disk usage information in human-readable form, then you can use the -h option in that case.

df -h

Here’s an example:

make df display usage information in human-readable form

Observe the letters ‘G’ and ‘M’ that represent Gigabytes and Megabytes, making it easy for users to read these size figures.

Q4. How to make df display inode information instead of block usage?

If instead of block usage, you want df to display inode information in output, then this can be done using the -i command line option.

df -i

make df display inode information instead of block usage

Observe that the second, third, and fourth columns now display inode-related figures.

Q5. How to make df produce a total of all block-related info?

To produce a total for the size, used, and avail columns, use the --total option.

df --total

make df produce total of all block-related info

Observe that a new row gets added at the bottom that displays total values.

Q6. How to make df print file system type in output?

While df displays the file system name by default, you can force it to display the corresponding type as well, something which can be done through the -T option.

df -T

make df print file system type in output

A new, second column (Type) is where the type-related information is displayed.

Q7. How to limit df output to file systems of particular type?

You can also limit the output of the df command to file systems of a particular type. This can be done using the -t option, which requires you to enter the file system name as its value.

df -t <filesystem-name>

Following is an example:

limit df output to file systems of particular type

Q8. How to make df exclude a particular file system type?

Similar to the way you include, you can also make df exclude a particular type of file system in its output. The command line option for this is -x.

df -x <filesystem-name>

Following is an example:

make df exclude a particular file system type

So you can see that no entry for the tmpfs file system was produced in the output.

Conclusion

Clearly, df isn’t a difficult tool to understand and use, primarily because the majority of its command line options are aimed at customizing the way the tool’s output is produced. We’ve covered many of the important options here. Once you are done practicing them, head to the command’s man page to learn more about it.


via Howtoforge Linux Howtos und Tutorials http://ift.tt/179bQQd

July 18, 2017 at 12:22PM

Stats Say Linux Marketshare Hit All-Time High Last Month

http://ift.tt/2tEPsg5

Desktop Linux marketshare hit an all-time high last month, according to the latest data from web analytics firm NetMarketShare.

The company reports that Linux users made up 2.36% of visits to the websites it tracks last month, the highest the Linux figure has ever been.

Not that this uptick is surprising. It continues a trend we’ve seen over the past 12 months, which has (more or less) seen Linux usage rank firmly above the 2% line on NetMarketShare.

More impressively, the latest figures mean Linux usage is roughly a third of that of Apple macOS, which sits at 6.12% in the June 2017 rankings, down on the previous month.

The combined flavors of Microsoft Windows (who else?) continue to eat up the lion’s share of the desktop operating system chart, falling to 91.51% during the same period.

Caution: Caution Advised

As we always say when we present stats like this: make sure you take ’em with a large pinch of Na.

Why?

It’s because statistics, numbers and reporting methods not only vary between competing analytics companies but are also open to interpretation, debate and potential errors.

Furthermore, NetMarketShare accrues its data based on visits to a mere 40,000 websites globally. While 40,000 is a largish sample size, it’s also ludicrously small when compared against the number of websites that are out there!

Finally, while the Linux figure reported does exclude Android/Linux, it does include ChromeOS/Linux in addition to GNU/Linux, leading some to attribute the rise in Linux marketshare to Google’s Chrome OS.


via OMG! Ubuntu! http://ift.tt/eCozVa

July 3, 2017 at 07:12PM

Linux is Running on Almost All of the Top 500 Supercomputers

http://ift.tt/2rVAK0R

Brief: Linux may not have a decent market share on the desktop, but it rules supercomputers, with 498 out of the top 500 running on Linux.

Linux is still running on more than 99% of the top 500 fastest supercomputers in the world. Same as last year, 498 out of the top 500 supercomputers run Linux, while the remaining 2 run Unix.

No supercomputer dared to run Windows (pun intended). And of course, no supercomputer runs macOS because Apple has not manufactured the ‘iSupercomputer’ yet.

This information is collected by Top500, an independent organization that publishes details about the top 500 fastest supercomputers known to it, twice a year. You can go to the website and filter the list based on country, OS type, vendors, and so on. Don’t worry if you don’t want to do that; I’ll present some of the most interesting facts from this year’s list here.

Linux rules supercomputers because it is open source

20 years back, most of the supercomputers ran Unix. But eventually, Linux took the lead and became the preferred operating system for supercomputers.

Growth of Linux on Supercomputers. Image credit: ZDNet

The main reason for this growth is the open source nature of Linux. Supercomputers are specific devices built for specific purposes. This requires a custom operating system optimized for those specific needs.

Unix, being a closed source and proprietary operating system, is an expensive deal when it comes to customization. Linux, on the other hand, is free and easier to customize. Engineering teams can easily customize a Linux-based operating system for each of the supercomputers.

However, I wonder why open source variants such as FreeBSD failed to gain popularity on supercomputers.

To summarize the list of top 500 supercomputers based on OS this year:

  • Linux: 498
  • Unix: 2
  • Windows: 0
  • MacOS: 0

To give you a year-by-year summary of Linux’s share of the top 500 supercomputers:

  • In 2012: 94%
  • In 2013: 95%
  • In 2014: 97%
  • In 2015: 97.2%
  • In 2016: 99.6%
  • In 2017: 99.6%
  • In 2018: ???

The only two supercomputers running Unix are ranked 493rd and 494th:

Supercomputers running Unix

Some other interesting stats about fastest supercomputers

Top 10 Fastest Supercomputers in 2017

Moving Linux aside, here are some other interesting stats about supercomputers this year:

  • World’s fastest supercomputer, Sunway TaihuLight, is based at the National Supercomputing Center in Wuxi, China. It has a speed of 93 PFLOPS.
  • World’s second fastest supercomputer is also based in China (Tianhe-2), while the third spot is taken by Switzerland-based Piz Daint.
  • Out of the top 10 fastest supercomputers, the USA has 5, Japan and China have 2 each, while Switzerland has 1.
  • The United States leads with 168 supercomputers in the list, followed by China with 160 supercomputers.
  • Japan has 33, Germany has 28, France has 18, Saudi Arabia has 6, India has 4, and Russia has 3 supercomputers in the list.

Some interesting facts, aren’t they? You can filter your own list here for further details.

While you are reading it, do share this article on social media. It’s an achievement for Linux, and we’ve got to show it off 😀


via LXer Linux News http://lxer.com/

June 27, 2017 at 05:01AM

CRM: The Same Difference for All Businesses

A Customer Relationship Manager is the most significant, and in my opinion, the most underutilized business tool. Practically overlooked by small businesses, a CRM should be what most small businesses invest in first, well before business cards, office space, or even a Web site. Having a good CRM can help you visualize, plan, execute, and fine tune your sales process, business workflow, communications, marketing and customer service – all from one place. Being able to do all of that, as a small business owner, will make you all that more organized when you’re ready to open your doors, real or virtual.

A quick guide to using FFmpeg to convert media files

http://ift.tt/2qRJBUI

There are many open source tools out there for editing, tweaking, and converting multimedia into exactly what you need. Tools like Audacity or Handbrake are fantastic, but sometimes you just want to change a file from one format into another quickly. Enter FFmpeg.


via Opensource.com http://ift.tt/1EBSQUh

June 5, 2017 at 03:14AM

How to configure Nginx SSL/TLS passthrough with TCP load balancing

http://ift.tt/2s04gG9

How do I configure SSL/TLS pass through on Nginx load balancer running on Linux or Unix-like system? How do I load balance TCP traffic and setup SSL Passthrough to pass SSL traffic received at the load balancer onto the backend web servers?


via [RSS/Feed] nixCraft: Linux Tips, Hacks, Tutorials, And Ideas In Blog Format http://ift.tt/Awfi7s

June 6, 2017 at 10:03AM

Learn the Secrets of Building a Business with Open Source

http://ift.tt/2sKPgZ2

Today, if you’re building a new product or service, open source software is likely playing a role. But many entrepreneurs and product managers still struggle with how to build a successful business purely on open source.                    

The big secret of a successful open source business is that “it’s about way more than the code,” says John Mark Walker, a well-known voice in the open source world with extensive expertise in open source product, community, and ecosystem creation at Red Hat and Dell EMC.  “In order to build a certified, predictable, manageable product that ‘just works,’ it requires a lot more effort than just writing good code.”

It requires a solid understanding of open source business models and the expertise and management skills to take advantage of developing your products in an open source way.

In a new eBook, Building a Business on Open Source, The Linux Foundation has partnered with Walker to distill what it takes to create and manage a product or service built on open source software.  It starts with an overview of the various business models, then covers the business value of the open source platform itself, and describes how to create a successful open source product and manage the software supply chain behind it.  

“If you’re developing software in an open source way, you have options that proprietary developers don’t have,” Walker writes. “You can deliver better software more efficiently that is more responsive to customer needs — if you do it well and apply best practices.”

The Value of the Open Source Platform            

As open source has become more prevalent, it has changed the way products are developed. Walker describes the unique challenges and questions raised by adopting an open source approach, including questions of sustainability, accountability, and monetization.

Walker admits that Red Hat remains the only company that has been successful with a pure open source business model (without being acquired). Many companies are still pursuing a similar model selling open source software, but other models around open source exist, including the venture-capitalist’s favorite open core model, a services and support model, and a hybrid model that mixes open source code with proprietary components.

In discussing the difference between open core and hybrid business models, Walker says his biggest problem with them is that they both assume there is no intrinsic value in the platform itself.

“I am not discounting the added value of proprietary software on top of open source platforms; I am suggesting that the open source platforms themselves are inherently valuable and can be sold as products in their own right, if done correctly,” Walker states.

“If you begin with the premise that open source platforms have great value, and you sell that value in the form of a certified software product, that’s just a starting point. The key is that you’re selling a certified version of an open source platform and from there, it’s up to you how to structure your product approach,” he continues.

What’s emerging now is a new “open platform model,” in which the open source platform itself is sold in the form of a certified product. It may include proprietary add-ons, but derives most of its value from the platform.

A Messy Business

Creating a business purely around an open source platform requires new thinking, and a new process. It’s difficult to turn the code that’s available to everyone for free into a product that just works and can be used at scale.

“Creating a product is a messy, messy business. There are multiple layers of QA, QE, and UX/UI design that, after years of effort, may result in something that somewhat resembles a usable product,” writes Walker.

Walker explains the distinction between an open source project and a product that’s based on that project. He points out that “creating, marketing and selling a product is no different in the open source space from any other endeavor.”

He details the process of making a product out of an open source project; it’s not nearly as easy as packaging the code into a product and charging for it.

Mastering the Supply Chain

Part two of the ebook covers more advanced topics, including the management of open source software supply chains, which offers some unique challenges.

“A well-managed supply chain is crucial to business success. How do you determine ideal pricing, build relationships with the right manufacturers, and maximize the efficiency of your supply chain so you’re able to produce more products cheaply and sell more of them?” asks Walker.

“One potential conclusion is that to be successful at open source products, you must master the ability to influence and manage the sundry supply chains that ultimately come together in the product creation process,” he says.

In the final chapter, Walker takes a deep dive into the importance of being an influencer of the supply chain. He talks about some best practices in the process of evaluating supply chain components and gives examples of companies like Red Hat who have an upstream first policy that plays a big role in making them an influencer of the supply chain.

The crux is, “To get the most benefit from the open source software supply chain, you must be the open source software supply chain.”

Conclusion

It might sound easy to take some free source code, package it up, and create a product out of it. But in reality it’s a very challenging job. If you do it right, however, an open source approach offers immense benefits that are unmatched in the closed source world.

That’s exactly what this book is all about. Doing it right. The methodologies and processes detailed by Walker will help companies, managers, and developers adopt best practices to create valuable open source products as open source business models shift, yet again.

Learn how to build a business on open source. Download the free ebook today!


via http://ift.tt/1Wf4iBh

June 5, 2017 at 05:27PM

Accidentally overwrote a binary file on Linux? Here is how to restore it

http://ift.tt/2qgqSgq

Posted in Categories Command Line Hacks, last updated May 23, 2017

A shell script went wild due to some bug, and the script overwrote the binary file /bin/ping. Here is how to restore it.

/bin/ping erased (Credit: http://ift.tt/1l0OTWM)


There are two ways to solve this problem.

Easy way: copy it from another server

Just scp the file from another box running the same version of your Linux distribution (the hostname below is a placeholder):
$ sudo scp user@server2:/bin/ping /bin/ping

Proper sysadmin way: Search and reinstall package

First, query the package which provides the file /bin/ping, as per your Linux distro:

Debian/Ubuntu Linux user type

$ dpkg -S /bin/ping
iputils-ping: /bin/ping

Now just reinstall the iputils-ping package using apt-get command or apt command:
$ sudo apt-get --reinstall install iputils-ping

RHEL/SL/Scientific/Oracle Linux user type

$ yum provides /bin/ping
iputils-20071127-24.el6.x86_64 : Network monitoring tools including ping

Now just reinstall the iputils package using yum command:
$ sudo yum reinstall iputils

Fedora Linux user type

$ dnf provides /bin/ping
iputils-20161105-1.fc25.x86_64 : Network monitoring tools including ping

Now just reinstall the iputils package using dnf command:
$ sudo dnf reinstall iputils

Arch Linux user type

$ pacman -Qo /bin/ping
/usr/bin/ping is owned by iputils 20161105.1f2bb12-2

Now just reinstall the iputils package using pacman command:
$ sudo pacman -S iputils

Suse/OpenSUSE Linux user type

$ zypper search -f /bin/ping
Sample outputs:

Loading repository data...
Reading installed packages...

S | Name    | Summary                            | Type   
--+---------+------------------------------------+--------
  | ctdb    | Clustered TDB                      | package
i | iputils | IPv4 and IPv6 Networking Utilities | package
  | pingus  | Free Lemmings-like puzzle game     | package

Now just reinstall the iputils package using zypper command:
$ sudo zypper install -f iputils

What can be done to avoid such problems in the future?

Testing in a sandbox is an excellent way to prevent such problems. Care must also be taken to make sure that the variable has a value. The following is dangerous:
echo "foo" > $file
Something like the following would help (see “If Variable Is Not Defined, Set Default Variable“):
file="${1:-/tmp/file.txt}"
echo "foo" > $file

Another option is to stop if the variable is not defined:
${Variable?Error \$Variable is not defined}


via [RSS/Feed] nixCraft: Linux Tips, Hacks, Tutorials, And Ideas In Blog Format http://ift.tt/Awfi7s

May 23, 2017 at 02:27PM