
How to schedule SOS dashboard view?

Path Finder

When I run the SOS dashboard Disk Usage from <URI:8000>/app/sos/indexdiskusage as a search, I get all the data, but I am unable to get the dashboard view as it appears in SOS. Is there any way to schedule the dashboard view of SOS directly?


Re: How to schedule SOS dashboard view?

Motivator

You can view the details, copy the search, and recreate it in the Search app. You also have to grant the role/app access to the sos index/objects. I have copied the search activity reports and indexer reports from SOS, recreated them in the Search app, and scheduled them.
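For example, once the search is copied into the Search app, the scheduling can be done in savedsearches.conf. This is only an illustrative sketch; the stanza name, cron schedule, email address, and the placeholder SPL are not taken from the actual SOS dashboard:

```ini
# savedsearches.conf -- illustrative example of scheduling a search copied from an SOS panel
[SOS Disk Usage (copied)]
# paste the SPL copied from the SOS panel here (placeholder below)
search = index=sos <copied search goes here>
enableSched = 1
cron_schedule = 0 6 * * *
# e-mail the results when the schedule fires
action.email = 1
action.email.to = you@example.com
```

The same can be done through the UI by saving the search as a report and adding a schedule, which writes an equivalent stanza for you.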

Thanks,
Raghav


Re: How to schedule SOS dashboard view?

Path Finder

I was able to get the search results, but I need the exact view from the SOS app.


Re: How to schedule SOS dashboard view?

SplunkTrust

Maybe you can schedule it as a PDF delivery. You're never going to "see" the exact view in an email notification.

There is a tool called PhantomJS, a lightweight headless browser that can be scripted to take screenshots of websites.

We used it to pull exact replicas of dashboards before.

Here's what led us to PhantomJS. Note that he's talking about taking screenshots of Kibana, not Splunk, but the same concept applies. PhantomJS is available for Windows and Linux:

http://www.ragingcomputer.com/2014/03/kibana3-automated-email-reports-using-windows

Here's a python script for pulling report schedules from an elasticsearch db and using the data to pull a screenshot of a kibana dashboard. It's not for splunk, but perhaps it will give you some ideas.
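The kibanapdf.js file the script invokes isn't shown in the thread, but a minimal PhantomJS rendering script might look like the following. This is a hypothetical sketch, not the author's actual file; it takes the URL, output path, and paper size on the command line, matching how the Python script invokes it:

```javascript
// render.js -- hypothetical sketch, run as: phantomjs render.js <url> <output.pdf> <papersize>
var system = require('system');
var page = require('webpage').create();

var url = system.args[1];
var output = system.args[2];
page.paperSize = { format: system.args[3] || 'letter', orientation: 'landscape' };

page.open(url, function (status) {
    if (status !== 'success') {
        console.log('Unable to load ' + url);
        phantom.exit(1);
    }
    // give the dashboard's JavaScript time to draw its panels before rendering
    window.setTimeout(function () {
        page.render(output);
        phantom.exit(0);
    }, 5000);
});
```

The fixed timeout is crude but common with PhantomJS; dashboards render asynchronously, so you wait a few seconds before capturing the page.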

#!/usr/bin/python
# pdfScheduler.py
# requires python-crontab "pip install python-crontab"
# requires elasticsearch "pip install elasticsearch" 

from crontab import CronTab
import requests
from elasticsearch import Elasticsearch
import elasticsearch.helpers
import json
import subprocess

##############GLOBALS###############

esURI = "http://localhost:9200"
es = Elasticsearch(esURI)
reportIndex = "reporting"

##############FUNCTIONS#############

def getESHealth(uri):
    '''
    expects: uri to be http://host:port/_cat/health
    returns: True if cluster health is green, False otherwise
    requires: requests to be imported at module level (kept out of the function for performance reasons)
    '''
    try:
        # _cat/health output is space-separated: "epoch timestamp cluster status ..."; field 3 is the status
        return requests.get(uri).text.split(' ')[3] == "green"
    except requests.exceptions.RequestException as e:
        print(e)
        exit(1)

def getReports(esObj,elasticUri,index="reporting",doc_type="config"):
    '''
    expects: elasticUri to be http://host:port
    optional: elasticsearch index name as string and elasticsearch doc_type as string
    defaults: index="reporting", doc_type="config"
    requires: requests, Elasticsearch, and elasticsearch.helpers to be imported at module level (kept out of the function for performance reasons)
    '''
    try:
        uri = elasticUri + "/" + index
        # requires 'import requests'
        if requests.get(uri).status_code == 200:
            if requests.get(uri + '/_search/exists?q=*').status_code == 200:
                # requires 'import elasticsearch.helpers'
                for report in elasticsearch.helpers.scan(esObj, query='{"fields": ["_id","enabled","schedule","Email","URI","name","customerID"]}', doc_type=doc_type, index=index, scroll='10s'):
                    yield report
            else: 
                print("No reports found in " + uri + '/_search/exists?q=*')
                exit(10)
        else: 
            print("Elasticsearch Index not found at " + uri)
            exit(11)
    except requests.exceptions.RequestException as e:
        print(e)
        exit(12)


'''
CODE SUMMARY:
script runs (assumed via cron) --> checks cluster health, exits if not green -->
queries Elasticsearch for report configs --> reports are returned via iteration
(if none, exit) --> for each report returned, create a cron job for reporting
'''
##############EXECUTION#############
try:
    if 'cron' in locals(): #for multiple execution within same python shell
        cron.remove_all()
    if getESHealth(esURI + '/_cat/health'):
        # delete the run-as user's crontab
        subprocess.call(['/usr/bin/crontab','-r'])
        # create cron object using the user's crontab
        cron = CronTab(user=True)
        for report in getReports(es,esURI):
            comment = json.dumps(report['_id'])
            comment = comment.strip('"')
            command = '/usr/bin/phantomjs --ignore-ssl-errors=yes "/opt/reporting/kibanapdf.js" "' + json.dumps(report['fields']['URI']).strip('"[]') + '" "/opt/reporting/overview.pdf" letter'
            schedule = json.dumps(report['fields']['schedule']).strip('"[]')
            newJob = cron.new(command=command,comment=comment)
            newJob.setall(schedule)
            if newJob.is_valid():
                newJob.enable()
            else:
                print("job not valid")
        cron.write()
    else:
        print("bad cluster health")
        exit(100)
except OSError as e:
    print(e)
    exit(101)

#adding data that is consumed by this script
#curl -XPUT 'localhost:9200/reporting/config/4' -d '{"name" : "Test Report", "enabled": null, "schedule" : "0 0 * * *", "TimeDiff" : -5, "Email" : "foo@bar.com", "URI" : "http://localhost:9200/#/dashboard/A-dashboard", "customerID" : "1"}'