Beyond Leads: A CRM for Every Business

Getting good leads is a vital element in any business. A steady supply of potential customers will ensure that your business continues to grow. But what happens after you get the lead? How do you convert them into customers? And how do you stay engaged with your existing customers?

Following up on leads is one of the harder tasks a sole business owner faces. After all, owners are usually the subject matter experts, not the salespeople. In small businesses, however, owners have to do everything, sometimes all at once. And that means having a streamlined and optimized customer relationship management system so they can take care of the “other” business tasks quickly and efficiently.

Most small business owners I’ve spoken to say the same thing: “Why do I need a CRM? It’s just me.” That’s precisely why a CRM system is crucial for sole-ownership businesses; it’s just one person doing everything. Spending your time on repetitive tasks takes away from your core business activities, and that can lead to lost business, a work backlog, and frustrated customers.

A CRM is an essential tool no matter what business you’re in. Do you have customers? Do you offer products or services? Then, whatever your industry, a good CRM tool can be adapted and customized to your business’s specific needs.

Let’s take a quick look at some of the tasks a CRM can do for you automatically, besides lead generation.

Data Organization

One of the primary jobs of a good CRM is to help you organize your customer data, including both potential and existing customers. It’s been estimated that people spend as much as 30% of their workday searching for the information they need. Being able to sift through your customer data quickly and efficiently will free up time to do more critical work.

Sales Pipeline Tracking

If you have multiple customers at different stages of your sales process, it’s helpful to know where you are with which customer, and on which customers you need to focus your immediate attention. It can also help you plan your marketing and advertising strategy for the coming quarter.

Quotes and Invoicing

Customers who need price quotes for approvals will appreciate the speed at which you can send out a price quote. By setting up a template directly in the CRM, you’ll save time copying and pasting information from the CRM to a price quote template in a word processing program.

Newsletter Mailing

Use your CRM to automate mailings to your customers to inform them of specials, new products, and so on. A good CRM will be able to create a dynamic list based on customer preferences that you’ve entered into your CRM.

Real-Time Data

If your business needs to supply up-to-the-minute information to your website, you can have your CRM automate the process rather than entering the data in two places, which could lead to out-of-sync data or entry errors.

By making sure your CRM is doing all of the repetitive work that takes up valuable time, you can focus more on your business.

60 Second Solution: Incentive for Customer Engagement on Social Media

Local businesses have an interest in creating social media incentives for their in-store customers. Online recommendations can make all the difference to businesses in competitive markets, and personal recommendations from friends are much more likely to draw customers than regular online advertising. Try this incentive solution to increase your online social media engagement.

A Modern Day Front-End Development Stack


Application development methodologies have seen a lot of change in recent years. With the rise and adoption of microservice architectures, cloud computing, single-page applications, and responsive design to name a few, developers have many decisions to make, all while still keeping project timelines, user experience, and performance in mind. Nowhere is this more true than in front-end development and JavaScript.

To help catch everyone up, we’ll take a brief look at the revolution in JavaScript development over the last few years. Next, we’ll look at some of the challenges and opportunities facing the front-end development community. To wrap things up, and to help lead into the next parts of this series, we’ll preview the components of a fully modern front-end stack.

The JavaScript Renaissance

When NodeJS came out in 2009, it was more than just JavaScript on the command line or a web server running in JavaScript. NodeJS concentrated software development around something that was desperately needed: a mature and stable ecosystem focused on the front-end developer. Thanks to Node and its default package manager, npm, JavaScript saw a renaissance in how applications could be architected (e.g., Angular leveraging Observables, or the functional paradigms of React) as well as how they were developed. The ecosystem thrived, but because it was young it also constantly churned.

Happily, the past few years have allowed certain patterns and conventions to rise to the top. In 2015, the JavaScript community saw the release of a new spec, ES2015, along with an even greater explosion in the ecosystem. The illustration below shows just some of the most popular JavaScript ecosystem elements.


State of the JavaScript ecosystem in 2017

At Kenzan, we’ve been developing JavaScript applications for more than 10 years on a variety of platforms, from browsers to set-top boxes. We’ve watched the front-end ecosystem grow and evolve, embracing all the great work done by the community along the way. From Grunt™ to Gulp, from jQuery® to AngularJS, from copying scripts to using Bower for managing our front-end dependencies, we’ve lived it.

As JavaScript matured, so did our approach to our development processes. Building off our passion for developing well-designed, maintainable, and mature software applications for our clients, we realized that success always starts with a strong local development workflow and stack. The desire for dependability, maturity, and efficiency in the development process led us to the conclusion that the development environment could be more than just a set of tools working together. Rather, it could contribute to the success of the end product itself.  

Challenges and Opportunities

With so many choices, and such a robust and blossoming ecosystem at present, where does that leave the community? While having choices is a good thing, it can be difficult for organizations to know where to start, what they need to be successful, and why they need it. As user expectations grow for how an application should perform and behave (load faster, run more smoothly, be responsive, feel native, and so on), it gets ever more challenging to find the right balance between the productivity needs of the development team and the project’s ability to launch and succeed in its intended market. There is even a term for this: analysis paralysis, the difficulty of arriving at a decision caused by overthinking and needlessly complicating a problem.

Chasing the latest tools and technologies can inhibit velocity and the achievement of significant milestones in a project’s development cycle, risking time to market and customer retention. At a certain point an organization needs to define its problems and needs, and then make a decision from the available options, understanding the pros and cons so that it can better anticipate the long-term viability and maintainability of the product.

At Kenzan, our experience has led us to define and coalesce around some key concepts and philosophies that ensure our decisions will help solve the challenges we’ve come to expect from developing software for the front end:

  • Leverage the latest features available in the JavaScript language to support more elegant, consistent, and maintainable source code (like import / export (modules), class, and async/await).

  • Provide a stable and mature local development environment with low-to-no maintenance (that is, no global development dependencies for developers to install or maintain, and intuitive workflows/tasks).

  • Adopt a single package manager to manage front-end and build dependencies.

  • Deploy optimized, feature-based bundles (packaged HTML, CSS, and JS) for smarter, faster distribution and downloads for users. Combined with HTTP/2, large gains in user experience and performance can be made here for little investment.

A New Stack

In this series, our focus is on three core components of a front-end development stack. For each component, we’ll look at the tool that we think brings the best balance of dependability, productivity, and maintainability to modern JavaScript application development, and that is best aligned with our desired principles.

Package Management: Yarn

The challenge of how to manage and install external vendor or internal packages in a dependable and consistently reproducible way is critical to the workflow of a developer. It’s also critical for maintaining a CI/CD (continuous integration/continuous delivery) pipeline. But which package manager do you choose, given all the great options available? npm? jspm? Bower? A CDN? Or do you just copy and paste from the web and commit to version control?

Our first article will look at Yarn and how it focuses on being fast and providing stable builds. Yarn accomplishes this by ensuring that the version of a vendor dependency installed today will be the exact same version installed by a developer next week. It is imperative that this process be frictionless and reliable, distributed and at scale, because any downtime prevents developers from coding or deploying their applications. Yarn aims to address these concerns by providing a fast, reliable alternative to the npm CLI for managing dependencies, while continuing to leverage the npm registry as the host for public Node packages. Plus, it’s backed by Facebook, an organization that has scale in mind when developing its tooling.
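To make that determinism concrete, here is a sketch of what a yarn.lock entry looks like (the package, versions, and checksum placeholder are illustrative); every yarn install resolves the semver range on the first line to the exact pinned version recorded below it:

```
# yarn.lock (excerpt; package and versions are illustrative)
left-pad@^1.1.0:
  version "1.1.3"
  resolved "https://registry.yarnpkg.com/left-pad/-/left-pad-1.1.3.tgz#<checksum>"
```

Because the lockfile is committed to version control, the whole team (and the CI/CD pipeline) installs from the same resolved set.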

Application Bundling: webpack™

Building a front-end application, which is typically composed of a mix of HTML, CSS, and JS, as well as binary formats like images and fonts, can be tricky to maintain and even more challenging to orchestrate. So how does one turn a code base into an optimized, deployable artifact? Gulp? Grunt? Browserify? Rollup? SystemJS? All of these are great options with their own strengths and weaknesses, but we need to make sure the choice reflects the principles we discussed above.

webpack is a build tool specifically designed to package and deploy web applications comprised of any kind of potential assets (HTML, CSS, JS, images, fonts, and so on) into an optimized payload to deliver to users. We want to take advantage of the latest language features like import/export and class to make our code future-facing and clean, while letting the tooling orchestrate the bundling of our code such that it is optimized for both the browser and the user. webpack can do just that, and more!

Language Specification: TypeScript

Writing clean code is, in and of itself, always a challenge. JavaScript, a dynamic and loosely typed language, has afforded developers a medium for implementing a wide range of design patterns and conventions. Now, with the latest JavaScript specification, we see more solid patterns from the programming community making their way into the language. Support for features like import/export and class has brought a fundamental paradigm shift to how a JavaScript application can be developed, and can help ensure that code is easier to write, read, and maintain. However, there is still a gap in the language that generally begins to impact applications as they grow: maintainability and integrity of the source code, and predictability of the system (the application state at runtime).

TypeScript is a superset of JavaScript that adds type safety, access modifiers (private and public), and newer features from the next JavaScript specification. A more strictly typed language can help promote and then enforce architectural design patterns, using a transpiler to validate code before it even gets to the browser; this reduces developer cycle time while also making the code self-documenting. This is particularly advantageous because, as applications grow and change happens within the codebase, TypeScript can help keep regressions in check while adding clarity and confidence to the code base. IDE integration is also a huge win here.
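As a minimal sketch of what this looks like in practice (the class and property names below are illustrative, not from any particular codebase), type annotations and access modifiers let the compiler reject invalid code before it ever reaches the browser:

```typescript
// Illustrative sketch: type safety and access modifiers in TypeScript.
class Customer {
  // Parameter properties declare and assign fields in one step.
  constructor(private name: string, public email: string) {}

  greet(): string {
    return `Hello, ${this.name}`;
  }
}

const c = new Customer("Ada", "ada@example.com");
console.log(c.greet());    // prints "Hello, Ada"
// c.name = "Grace";       // compile-time error: 'name' is private
// new Customer(42, "x");  // compile-time error: number is not a string
```

The commented-out lines are exactly the kind of regressions the transpiler catches at build time rather than at runtime.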

What About Front-End Frameworks?

As you may have noticed, so far we’ve intentionally avoided recommending a front-end framework or library like Angular or React, so let’s address that now.

Different applications call for different approaches to their development based on many factors like team experience, scope and size, organizational preference, and familiarity with concepts like reactive or functional programming. At Kenzan, we believe evaluating and choosing any ES2015/TypeScript compatible library or framework, be it Angular 2 or React, should be based on characteristics specific to the given situation.  

If we revisit our illustration from earlier, we can see a new stack take form that provides flexibility in choosing front-end frameworks.

A modern stack that offers flexibility in front-end frameworks

Below this upper “view” layer is a common ground that can be built upon by leveraging tools that embrace our key principles. At Kenzan, we feel that this stack converges on a space that captures the needs of both user and developer experience. This yields results that can benefit any team or application, large or small. It is important to remember that the tools presented here are intended for a specific type of project development (front-end UI application), and that this is not intended to be a one-size-fits-all endorsement. Discretion, judgement, and the needs of the team should be the prominent decision-making factors.

What’s Next

So far, we’ve looked back at how the JavaScript renaissance of the last few years has led to a rapidly-maturing JavaScript ecosystem. We laid out the core philosophies that have helped us to meet the challenges and opportunities of developing software for the front end. And we outlined three main components of a modern front-end development stack. Throughout the rest of this series, we’ll dive deeper into each of these components. Our hope is that, by the end, you’ll be in a better position to evaluate the infrastructure you need for your front-end applications.

We also hope that you’ll recognize the value of the tools we present as being guided by a set of core principles, paradigms, and philosophies. Writing this series has certainly caused us to put our own experience and process under the microscope, and to solidify our rationale when it comes to front-end tooling. Hopefully, you’ll enjoy what we’ve discovered, and we welcome any thoughts, questions, or feedback you may have.

Next up in our blog series, we’ll take a closer look at the first core component of our front-end stack—package management with Yarn.

Kenzan is a software engineering and full service consulting firm that provides customized, end-to-end solutions that drive change through digital transformation. Combining leadership with technical expertise, Kenzan works with partners and clients to craft solutions that leverage cutting-edge technology, from ideation to development and delivery. Specializing in application and platform development, architecture consulting, and digital transformation, Kenzan empowers companies to put technology first.

Grunt, jQuery, and webpack are trademarks of the JS Foundation.



via LXer Linux News

July 18, 2017 at 03:35AM

Linux df Command Tutorial for Beginners (8 Examples)


Sometimes, you might want to know how much space is consumed (and how much is free) on a particular file system on your Linux machine. There’s a specific command – dubbed df – that does this for you. In this tutorial, we will discuss the basics of this command, as well as some of the major features it offers.

But before we do that, it’s worth mentioning that all examples and instructions mentioned in the article have been tested on Ubuntu 16.04 LTS.

Linux df command

Here’s the syntax of this tool:

df [OPTION]… [FILE]…

And here’s how the man page describes the command:

df displays the amount of disk space available on the file system containing each file name  
argument. If no file name is given, the space available on all currently mounted file systems
is shown.

The following Q&A-style examples should give you a better idea of how this command line utility works.

Q1. How to make df display disk usage of file system containing a specific file?

Suppose you have a file (say, file1), and the requirement is to display the available or used space on the file system that contains this file. This is how you can do this:

df <filename>

Here’s an example:

[Screenshot: df run with a specific file as argument]

Q2. How to make df display disk usage of all file systems?

In case you want the tool to display disk usage information for all file systems, all you need to do is run the command without any arguments:

df

Here’s the output of the command in my case:

[Screenshot: df reporting all mounted file systems]

Q3. How to make df display usage information in human-readable form?

If the requirement is to make df display disk usage information in human-readable form, you can use the -h option.

df -h

Here’s an example:

[Screenshot: df -h output with human-readable sizes]

Observe the letters ‘G’ and ‘M’, which represent gigabytes and megabytes, making these size figures easy to read.
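For instance (the sizes shown will vary from machine to machine), you can combine -h with a path to report only the file system containing that path:

```shell
# Human-readable usage for the file system that contains the root directory
df -h /
```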

Q4. How to make df display inode information instead of block usage?

If instead of block usage, you want df to display inode information in output, then this can be done using the -i command line option.

df -i

[Screenshot: df -i output showing inode usage]

Observe that the second, third, and fourth columns now display inode-related figures.
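Likewise (the figures will vary by machine), a path argument scopes the inode report to a single file system:

```shell
# Inode usage for the file system that contains the root directory
df -i /
```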

Q5. How to make df produce a total of all block-related info?

To produce a total for the size, used, and avail columns, use the --total option.

df --total

[Screenshot: df --total output with a totals row]

Observe that a new row gets added at the bottom that displays total values.

Q6. How to make df print file system type in output?

While df displays the file system name by default, you can force it to display the corresponding type as well, something which can be done through the -T option.

df -T

[Screenshot: df -T output with a Type column]

A new, second column (Type) is where the type-related information is displayed.

Q7. How to limit df output to file systems of particular type?

You can also limit the output of the df command to file systems of a particular type. This can be done using the -t option, which requires you to enter the file system type as its value.

df -t <filesystem-type>

Following is an example:

[Screenshot: df -t output limited to one file system type]

Q8. How to make df exclude a particular file system type?

Similar to the way you include, you can also make df exclude a particular type of file system in its output. The command line option for this is -x.

df -x <filesystem-type>

Following is an example:

[Screenshot: df -x output excluding tmpfs]

So you can see that no entry for the tmpfs file system was produced in the output.


Clearly, df isn’t a difficult tool to understand and use, primarily because the majority of its command line options customize the way the tool’s output is produced. We’ve covered many of the important options here. Once you are done practicing them, head to the command’s man page to learn more.



via Howtoforge Linux Howtos und Tutorials

July 18, 2017 at 12:22PM

Stats Say Linux Marketshare Hit All-Time High Last Month


Desktop Linux marketshare hit an all-time high last month, according to the latest data from web analytics firm NetMarketShare.

The company report that Linux users made up 2.36% of tracked visits to websites it tracks last month, the highest the Linux figure has ever been.

Not that this uptick is surprising. It continues a trend we’ve seen over the past 12 months, which has (more or less) seen Linux usage rank firmly above the 2% line on NetMarketShare.

More impressively, the latest figures mean Linux usage is roughly a third of that of Apple macOS, which sits at 6.12% in the June 2017 rankings, down on the previous month.

The combined flavors of Microsoft Windows (who else?) continue to eat up the lion’s share of the desktop operating system chart, falling to 91.51% during the same period.

Caution: Caution Advised


As we always say when we present stats like this: make sure you take ’em with a large pinch of Na.


That’s because statistics, numbers, and reporting methods not only vary between competing analytics companies but are also open to interpretation, debate, and potential errors.

Furthermore, NetMarketShare accrues its data based on visits to a mere 40,000 websites globally. While 40,000 is a largish sample size, it’s also ludicrously small compared with the number of websites that are out there!

Finally, while the Linux figure reported does exclude Android/Linux, it does include ChromeOS/Linux in addition to GNU/Linux, leading some to attribute the rise in Linux marketshare to Google’s Chrome OS.



via OMG! Ubuntu!

July 3, 2017 at 07:12PM

Linux is Running on Almost All of the Top 500 Supercomputers


Linux rules supercomputers

Brief: Linux may not have a decent market share on the desktop, but it rules supercomputers, with 498 of the top 500 running Linux.

Linux still runs on more than 99% of the top 500 fastest supercomputers in the world. Same as last year, 498 of the top 500 supercomputers run Linux, while the remaining 2 run Unix.

No supercomputer dared to run Windows (pun intended). And of course, no supercomputer runs macOS because Apple has not manufactured the ‘iSupercomputer’ yet.

This information is collected by Top500, an independent organization that publishes details about the 500 fastest supercomputers known to it, twice a year. You can go to the website and filter the list by country, OS type, vendor, and so on. If you don’t want to do that, I’ll present some of the interesting facts from this year’s list here.

Linux rules supercomputers because it is open source

20 years back, most supercomputers ran Unix. But eventually, Linux took the lead and became the preferred operating system for supercomputers.

Growth of Linux on supercomputers. Image credit: ZDNet

The main reason for this growth is the open source nature of Linux. Supercomputers are specific devices built for specific purposes. This requires a custom operating system optimized for those specific needs.

Unix, being a closed source and proprietary operating system, is an expensive deal when it comes to customization. Linux, on the other hand, is free and easier to customize. Engineering teams can easily customize a Linux-based operating system for each supercomputer.

However, I wonder why open source Unix variants such as FreeBSD failed to gain popularity on supercomputers.

To summarize the list of top 500 supercomputers based on OS this year:

  • Linux: 498
  • Unix: 2
  • Windows: 0
  • MacOS: 0

Here is a year-by-year summary of Linux’s share of the top 500 supercomputers:

  • In 2012: 94%
  • In 2013: 95%
  • In 2014: 97%
  • In 2015: 97.2%
  • In 2016: 99.6%
  • In 2017: 99.6%
  • In 2018: ???

The only two supercomputers running Unix are ranked 493rd and 494th:

Supercomputers running Unix

Some other interesting stats about fastest supercomputers

Top 10 Fastest Supercomputers in 2017

Linux aside, here are some other interesting stats about supercomputers this year:

  • World’s fastest supercomputer, Sunway TaihuLight, is housed at the National Supercomputing Center in Wuxi, China. It has a speed of 93 PFLOPS.
  • World’s second fastest supercomputer is also based in China (Tianhe-2) while the third spot is taken by Switzerland-based Piz Daint.
  • Out of the top 10 fastest supercomputers, the USA has 5, Japan and China have 2 each, and Switzerland has 1.
  • The United States leads with 168 supercomputers in the list, followed by China with 160.
  • Japan has 33, Germany has 28, France has 18, Saudi Arabia has 6, India has 4 and Russia has 3 supercomputers in the list.

Interesting facts, aren’t they? You can filter your own list here for further details.

While you are reading it, do share this article on social media. It’s an achievement for Linux and we got to show off 😀



via LXer Linux News

June 27, 2017 at 05:01AM

CRM: The Same Difference for All Businesses

A Customer Relationship Manager is the most significant, and in my opinion, the most underutilized business tool. Practically overlooked by small businesses, a CRM should be what most small businesses invest in first, well before business cards, office space, or even a Web site. Having a good CRM can help you visualize, plan, execute, and fine tune your sales process, business workflow, communications, marketing and customer service – all from one place. Being able to do all of that, as a small business owner, will make you all that more organized when you’re ready to open your doors, real or virtual.


A quick guide to using FFmpeg to convert media files


There are many open source tools out there for editing, tweaking, and converting multimedia into exactly what you need. Tools like Audacity or Handbrake are fantastic, but sometimes you just want to change a file from one format into another quickly. Enter FFmpeg.
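As a quick illustrative sketch of that use case (the file names are placeholders, and this assumes ffmpeg is on your PATH), a single command converts a file from one container format to another, with ffmpeg inferring the target format from the output extension:

```shell
# Convert an MP4 video to WebM; ffmpeg picks suitable codecs
# based on the .webm extension of the output file.
ffmpeg -i input.mp4 output.webm
```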





June 5, 2017 at 03:14AM

How to configure Nginx SSL/TLS passthrough with TCP load balancing


How do I configure SSL/TLS passthrough on an Nginx load balancer running on a Linux or Unix-like system? How do I load balance TCP traffic and set up SSL passthrough so that SSL traffic received at the load balancer is passed on to the backend web servers?
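While the linked article walks through the full setup, the core idea can be sketched as follows, assuming nginx was built with the stream module and using hypothetical backend addresses: TCP connections on port 443 are proxied as-is, so TLS terminates on the backends rather than on the load balancer.

```
# /etc/nginx/nginx.conf (excerpt) — illustrative backend addresses
stream {
    upstream tls_backends {
        server 192.168.0.11:443;
        server 192.168.0.12:443;
    }

    server {
        listen 443;               # raw TCP; no ssl directive here
        proxy_pass tls_backends;  # pass encrypted traffic through untouched
    }
}
```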



via [RSS/Feed] nixCraft: Linux Tips, Hacks, Tutorials, And Ideas In Blog Format

June 6, 2017 at 10:03AM