
Showing posts from 2014

The Library of Monitoring Objects, part 2

To process data from different industries and businesses, we need a way to define and describe such data in a manner the analytic software will understand. Kronometrix uses a simple and powerful concept for object definition, called the library of monitoring objects (LMO).

Industries
Suppose we want to handle data coming from Information Technology or Healthcare. Within these domains, we might have different sub-domains describing different types of analytic business. See how different domains and sub-domains map to LMO: The Library of Monitoring Objects.

For each industry we are interested in, we need to develop and write a simple formal definition of how data is expected to arrive from devices and sensors in that field. Example: we plan to gather information from the Information Technology, System Performance domain. For that we need to define and describe all messages expected from this field, for example in a file named computer_performance.it.json. This file describes a
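To make the idea concrete, here is a minimal, hypothetical sketch of what such a definition could contain, written here as a Lua table rather than JSON; the field names (domain, message, fields and so on) are illustrative assumptions, not the actual Kronometrix schema.

-- Hypothetical monitoring-object definition for the
-- "Information Technology / System Performance" domain.
-- Field names are illustrative only; the real
-- computer_performance.it.json schema may differ.
local computer_performance = {
    domain    = "information-technology",
    subdomain = "system-performance",
    message   = {
        name   = "sysrec",               -- overall system utilization
        fields = {
            { name = "timestamp",  type = "number", unit = "unix-seconds" },
            { name = "cpupct",     type = "number", unit = "percent" },
            { name = "memusedpct", type = "number", unit = "percent" },
            { name = "diskio",     type = "number", unit = "ops-per-second" },
        },
    },
}

return computer_performance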

Raspberry Pi and Redis

This summer we did something amazing: we took our enterprise appliance, a very powerful server, and we tried to run the same software, the same analytic kernel and everything else on something smaller and a lot simpler, like this: a light, mobile appliance. And we did it! Being powered by OpenResty and Lua, we were able to size our appliance easily and be up and running quickly. Small form factor, low power consumption, very cheap: these ARM-based devices are getting more and more attention and are becoming more popular nowadays:
Banana Pi - A Highend Single-Board Computer
4 x Raspberry Pi + Arduino = UDOO
What? Analytics on a Raspberry Pi...!? Yep. First of all, we wanted to experiment with ARM-based boards using a small number of hosts and a few data messages per host as input. The immediate trouble was that we had to fit all our software and customer data in 512 MB of RAM. And these boards don't run on the latest Intel specs, but on something like a 700 MHz CPU as a system on chip wi
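To give a feel for how little code is needed once OpenResty, Lua and Redis are on the board, here is a minimal sketch of a handler that stores one incoming metric in Redis through lua-resty-redis; the key names, host and query parameter are assumptions for illustration, not the Kronometrix code.

-- Minimal sketch: accept one metric value and push it into Redis.
-- Intended to be wired from nginx.conf with content_by_lua_file;
-- key names and the query parameter are illustrative only.
local redis = require "resty.redis"

local red = redis:new()
red:set_timeout(1000)  -- 1 second

local ok, err = red:connect("127.0.0.1", 6379)
if not ok then
    ngx.log(ngx.ERR, "failed to connect to redis: ", err)
    return ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
end

local args  = ngx.req.get_uri_args()
local value = tonumber(args.cpupct) or 0

-- keep the latest sample plus a short history list
red:set("host:pi01:cpupct", value)
red:lpush("host:pi01:cpupct:history", value)
red:ltrim("host:pi01:cpupct:history", 0, 99)

-- put the connection back into the keepalive pool
red:set_keepalive(10000, 32)

ngx.say("ok")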

Monitor your computer performance, the weather ...

Our new Web Real-Time Analytic software has been re-written from Perl to Lua. Nothing wrong with Perl, but we found Lua and NGINX amazing in terms of speed, stability and system utilization. So with our new analytic software came a new authentication and authorization layer, written of course in Lua. Since we have been talking a lot about allowing different types of data into our analytics, we designed and implemented new types of subscriptions, so people can open subscriptions and send data feeds to our analytics nice and easy. Say you are a meteorological institute and you have around 100 weather stations you want to analyze with Kronometrix. You can easily configure your stations to send HTTP data to our analytics, open a subscription for the right type of data analysis, and you are all set. Subscription types:
Aviation Weather Data - designed for aviation weather analysis; gathers data from weather stations and aggregates data for aviation
Weather and Climate data - designed for cli
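As a sketch of what a station-side sender could look like, a few lines of Lua with LuaSocket are enough to push one observation over HTTP; the endpoint URL, token header and payload format below are made up for illustration.

-- Minimal sketch of a weather-station data sender.
-- URL, token and payload format are illustrative assumptions.
local http  = require("socket.http")
local ltn12 = require("ltn12")

local payload  = "1405245600:21.4:1012.8:63"   -- time:temp:pressure:humidity
local response = {}

local ok, code = http.request{
    url     = "http://analytics.example.com/api/feed",
    method  = "POST",
    headers = {
        ["Content-Type"]   = "text/plain",
        ["Content-Length"] = tostring(#payload),
        ["Token"]          = "my-subscription-token",
    },
    source  = ltn12.source.string(payload),
    sink    = ltn12.sink.table(response),
}

print(ok, code, table.concat(response))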

The Appliance

Our next version of analytic software for computer performance data will have a totally new architecture and design, to support large or small customer accounts using the same stock base software. From Perl to Lua, from a time-series data store to a very powerful in-memory data structure server, all these changes were made to support more devices, use the computing power efficiently and deliver value. So what we did:
we dropped Perl and FastCGI
we dropped RRDtool
we started to use the Lua programming language
we moved to OpenResty for fast and robust HTTP processing
we switched to Redis for in-memory statistics
and we are still designing a new exploratory data module, for direct interaction with raw data
And the results are amazing. We have a more powerful architecture which lets us build nice, ready-made data appliances for large data-center customers and for small and medium-sized businesses. We are using the Data-Driven Documents JS library, D3, a fresh and powerful
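To give a feel for the new stack, here is a minimal sketch, not the actual appliance code, of an OpenResty handler that reads a couple of statistics from Redis and returns them as JSON, ready to be picked up by a D3 front-end; the key names are assumptions.

-- Minimal sketch: serve a small JSON summary out of Redis.
-- Key names are illustrative; wired via content_by_lua_file.
local redis = require "resty.redis"
local cjson = require "cjson"

local red = redis:new()
red:set_timeout(1000)

local ok, err = red:connect("127.0.0.1", 6379)
if not ok then
    ngx.log(ngx.ERR, "redis connect failed: ", err)
    return ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
end

local summary = {
    cpupct     = tonumber(red:get("host:web01:cpupct")) or 0,
    memusedpct = tonumber(red:get("host:web01:memusedpct")) or 0,
}

red:set_keepalive(10000, 32)

ngx.header["Content-Type"] = "application/json"
ngx.say(cjson.encode(summary))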

The Library of Monitoring Objects, part 1

Suppose you plan to collect data from one or many computer systems you have in your data center. What data would you collect, and what summary statistics would you store for those metrics? Would it be OK to sample data from each host every 60 seconds, or would you need to sample at one-second resolution? How about keeping all metrics as statistics aggregated over time, or would you also want to keep the raw data associated with those statistics? All these questions need to be answered when you set up performance monitoring for a site.

Metrics
So, what metrics do you need? You have lots of different systems: Red Hat Enterprise Linux 5.x and 6.x, Solaris 10 and 11, lots of Windows servers. So how do you know which metrics are useful and which are simply waste, and how would you be able to understand and explain what is going on with your computing infrastructure? Short answer: it depends on what you plan to do. If you want to be able to monitor all hosts for their availa
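To make the trade-off concrete, here is a small generic Lua sketch, not Kronometrix code, that reduces one minute of raw per-second samples to the kind of summary statistics you might store at 60-second resolution.

-- Generic sketch: summarize 60 raw per-second samples into
-- min / max / mean, the kind of aggregate you might store
-- instead of (or alongside) the raw data.
local function summarize(samples)
    local min, max, sum = math.huge, -math.huge, 0
    for _, v in ipairs(samples) do
        if v < min then min = v end
        if v > max then max = v end
        sum = sum + v
    end
    return { min = min, max = max, mean = sum / #samples }
end

-- example: one minute of simulated CPU utilization samples
local raw = {}
for i = 1, 60 do
    raw[i] = 40 + 20 * math.random()
end

local s = summarize(raw)
print(string.format("min=%.1f max=%.1f mean=%.1f", s.min, s.max, s.mean))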

Facts about Raw Data

Dr. Rufus Pollock, founder and co-Director of the Open Knowledge Foundation, said about raw data and fancy GUIs: "one thing I find remarkable about many data projects is how much effort goes into developing a shiny front-end for the material. Now I’m not knocking shiny front-ends, they’re important for providing a way for many users to get at the material ... think what a website designed five years ago looks like today (hello css). Then think about what will happen to that nifty ajax+css work you’ve just done. By contrast ascii text, csv files and plain old sql dumps (at least if done with some respect for the ascii standard) don’t date — they remain forever in style." Amen to that. Nothing will compete with the simplicity of storing raw data as CSV files. I'm amazed to see how complex and complicated businesses are nowadays, and what enormous amounts of money people pour into maintaining such complex systems. But what do people actually know about raw data? Are yo
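In that spirit, a raw-data store can be as simple as appending one line per sample to a plain CSV file; a generic Lua sketch, with an illustrative file name and fields:

-- Generic sketch: append one raw sample per line to a CSV file.
-- File name and fields are illustrative only.
local function append_sample(path, fields)
    local f = assert(io.open(path, "a"))
    f:write(table.concat(fields, ","), "\n")
    f:close()
end

-- timestamp, cpu %, memory %, disk ops
append_sample("sysrec.csv", { os.time(), 42.7, 61.3, 120 })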

Enter OpenResty

Wondering whether your applications will scale as more users come and visit your site? How about the system's resources: CPU, memory? Probably you will need to add more and more capacity every 4-6 months!? And how quickly can you add or change things in your web application platform to keep up with the competition? You need something fast, simple, easy to manage, simple to learn and develop.

Enter OpenResty
What is it? A web development platform based on Lua and the NGINX HTTP server, including various modules to speed up web development. Think of it as a web application server with lots of ready-made modules to make your life easier. But it is not Java, PHP, Perl or Ruby. It's Lua. "By taking advantage of various well-designed Nginx modules, OpenResty effectively turns the nginx server into a powerful web app server, in which the web developers can use the Lua programming language to script various existing nginx C modules and Lua modules and construct extremely high-performance we
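For a taste of how small an OpenResty handler can be, here is a minimal sketch of a Lua content handler; it assumes it is wired from nginx.conf via content_by_lua_file, and the location and parameter name are just examples.

-- hello.lua: a minimal OpenResty content handler (sketch).
-- Wire it in nginx.conf, for example:
--   location /hello { content_by_lua_file /path/to/hello.lua; }
local name = ngx.req.get_uri_args().name or "world"

ngx.header["Content-Type"] = "text/plain"
ngx.say("hello, ", name, " from ", ngx.var.server_name)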

FreeBSD 11 spellchecker packages

Missing spell checking on FreeBSD 10 or 11? Are you using LibreOffice or Sylpheed and can't spell-check your emails or documents? Keep reading... You need additional packages in order for spell checking to work correctly in your application.

Simple fix
Make sure you have installed all required packages: textproc/en-hunspell, textproc/en-aspell

Packages
From my running FreeBSD 11 system, these are all the packages needed for proper spell checking, whether for LibreOffice, an email client, any other editor, or command-line utilities such as aspell for your document files.

$ pkg info | grep spell
aspell-0.60.6.1_4        Spelling checker with better suggestion logic than ispell
aspell-ispell-0.60.6.1   Ispell compatibility script for aspell
en-aspell-7.1.0_1        Aspell English dictionaries
en-hunspell-7.1_1        English hunspell dictionaries
enchant-1.6.0_3          Dictionary/spellchecking framework
gtkspell-2.0.16_5        GTK+ 2 spell checking component
gtkspell-reference-2

Asus Zenbook and FreeBSD 11

This is a short description of how I got FreeBSD 11-CURRENT running on my Asus Zenbook UX32VD laptop. I'm very happy with the current setup, but of course there is room for improvement in many areas. Having DTrace, ZFS and the other goodies makes FreeBSD a really good candidate for a mobile environment. Last Updated: 16 March 2015

Zenbook UX32VD Configuration
CPU: Intel(R) Core(TM) i7-3517U CPU @ 1.90GHz
Memory: 6 GB RAM
Storage: SanDisk SSD i100 24GB, Samsung SSD 840 PRO 500GB
Video: Intel HD Graphics 4000, Nvidia GT620M
Display: 13" IPS Full HD anti-glare
Wifi: Intel Centrino Advanced-N 6235
3 x USB 3.0 port(s)
1 x HDMI
1 x Mini VGA Port
1 x SD card reader
Note: The laptop's internal hard drive has been replaced by a Samsung SSD.

FreeBSD 10 - ZFS, DTrace welcome back

For our internal testing and product development at SDR Dynamics, we are using FreeBSD 10. DTrace is there along with ZFS; both work out of the box, nothing to add or recompile. It is a nice thing to have these ported to the BSD world from Solaris. I did a quick update of my laptop, an Asus Zenbook UX32VD, to the latest FreeBSD CURRENT, version 11, very curious to play around with DTrace.

root@nereid:~ # uname -a
FreeBSD nereid 11.0-CURRENT FreeBSD 11.0-CURRENT #0 r265628: Thu May  8 05:26:05 UTC 2014     root@grind.freebsd.org:/usr/obj/usr/src/sys/GENERIC  amd64

List all probes:
root@nereid:~ # dtrace -l | wc -l
   56748

Syscalls by application name:
root@nereid:~ # dtrace -n 'syscall:::entry { @[execname] = count(); }'
dtrace: description 'syscall:::entry ' matched 1072 probes
^C
  wpa_supplicant        1
  gvfsd-trash           3
  syslogd

xenrec

SystemDataRecorder offers several data recorders for different jobs: overall system utilization, per-CPU and per-NIC utilization, along with many others. On systems where we use virtualization, we can in general monitor the guests directly or, if we want more accurate numbers, we need to monitor the host. The purpose of this short article is to show how you can use SystemDataRecorder to record Xen performance metrics.

Xen Hypervisor
Xen is an open-source type-1, or bare-metal, hypervisor with the following structure: the Xen Hypervisor is an exceptionally lean (less than 150,000 lines of code) software layer that runs directly on the hardware and is responsible for managing CPU, memory, and interrupts. It is the first program running after the boot-loader exits. The hypervisor itself has no knowledge of I/O functions such as networking and storage.

Xen dom0
The Control Domain, or Domain 0, is a specialized Virtual Machine that has special privileges like the
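As a rough sketch of the host-side approach, and not the actual xenrec recorder, one way to sample per-domain figures from dom0 is to run xentop in batch mode and parse its output with a few lines of Lua; the column layout assumed below follows a typical xentop -b header.

-- Rough sketch: sample per-domain CPU seconds from dom0 by
-- parsing one batch iteration of xentop. Not the real xenrec;
-- the column layout is an assumption based on xentop -b output.
local f = assert(io.popen("xentop -b -i 1", "r"))
local out = f:read("*a")
f:close()

for line in out:gmatch("[^\n]+") do
    -- skip the header line that starts with "NAME"
    if not line:match("^%s*NAME") then
        -- assumed columns: NAME STATE CPU(sec) CPU(%) MEM(k) ...
        local name, state, cpusec = line:match("^%s*(%S+)%s+(%S+)%s+(%S+)")
        if name and cpusec then
            print(string.format("%-12s cpu(sec)=%s", name, cpusec))
        end
    end
end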

cpuplayer - multiprocessor player

Problem solving is a very important skill for any System Administrator, Performance Analyst or even a System Manager. Sometimes you try to solve a problem by building a visual model of it and trying to see it. But can geometry, in general, help in understanding how some workload is executed on a 72-CPU server? It seems it can. Welcome to Problem Solving and Computer Graphics, a land where geometry meets performance analysis, troubleshooting, problem solving or even capacity planning. Using the power of geometric figures we can build a model of our original problem, simulate the conditions and see the results, letting the computer do all the work for us and present it in a graphical representation that is easy for us to digest. cpuplayer is such a tool, combining problem solving, geometry and performance analysis in one thing. Using barycentric coordinates, the player displays the CPU transition states from IDLE to USER or SY
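To illustrate the idea with a minimal sketch of the math, rather than the cpuplayer source itself: the three state fractions USER, SYS and IDLE, which sum to one, can be mapped with barycentric coordinates to a point inside a triangle whose corners represent the pure states.

-- Minimal sketch of the barycentric mapping: a CPU state
-- (user, sys, idle), with user + sys + idle = 1, becomes a
-- point inside a triangle whose vertices are the pure states.
local USER = { x = 0.0, y = 0.0 }
local SYS  = { x = 1.0, y = 0.0 }
local IDLE = { x = 0.5, y = math.sqrt(3) / 2 }

local function to_point(user, sys, idle)
    local total = user + sys + idle
    user, sys, idle = user / total, sys / total, idle / total
    return {
        x = user * USER.x + sys * SYS.x + idle * IDLE.x,
        y = user * USER.y + sys * SYS.y + idle * IDLE.y,
    }
end

-- a CPU spending 70% in user, 20% in sys, 10% idle
local p = to_point(0.7, 0.2, 0.1)
print(string.format("x=%.3f y=%.3f", p.x, p.y))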