Hello,
I have a challenge: I'd like to export the following data from a deployment server to a file that can be ingested with an inputs.conf monitor stanza, so that I can provide my team with visibility into new deployment server clients.
In our case, we have an app with client scope * that deploys a deploymentclient.conf. Therefore, any client with an application count of 1 can be considered "unmanaged" and needs remediation.
Since we are utilizing Splunk Cloud, I run an on-site deployment server that I can regulate better than the SH instances managed in the cloud.
The technicals of the challenge:
I could use outputcsv and perform field extraction, but I feel like that's complex. What I was hoping to do was use outputcsv usexml=true. But, because I want to make the system accessible in a semi-centralized fashion, I wish to schedule the query as an alert to run daily. The docs on outputcsv explicitly state that usexml is not valid when used in the "UI." This is quite clear given that the results are not written in XML but, as expected, in CSV.
So, I'm sort of stuck as to how to get this data fired into the ingestion pipeline and indexed with fields extracted. I know that splunkd will automatically extract fields from properly formatted XML, as well as from lines of text in a "key=value" format.
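For illustration, this is the sort of key=value line that would extract cleanly on ingest without custom props (field names borrowed from the query below; the host values are made up):

```
hostname=ds-client-01 ip=10.0.0.15 uts=Linux app=some_app state=Enabled
```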
Oh... you just came here for the query and are going to run it with cron using the splunk search ... command invocation?
| rest /services/deployment/server/clients splunk_server=local
| eval hostinfo=hostname."#".ip."#".utsname
| table hostinfo applications*.stateOnClient
| untable hostinfo applications value
| eval applications=replace(applications,"applications\.(\w+)\.stateOnClient","\1")
| search
[| rest /services/deployment/server/clients splunk_server=local
| eval hostinfo=hostname."#".ip."#".utsname
| table hostinfo applications*.stateOnClient
| untable hostinfo applications value
| eval applications=replace(applications,"applications\.(\w+)\.stateOnClient","\1")
| stats count(value) as ct by hostinfo
| search ct=1
| fields hostinfo ]
| rex field=hostinfo "^(?<hostname>.*)#(?<ip>.*)#(?<uts>.*)$"
| fields - _*
| fields hostname ip uts applications value
| outputcsv singlefile=true create_empty=true usexml=true newish_deployment_clients
I included ip and uts to provide some context for deciding which server classes to add the client to.
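One hedged alternative sketch, if key=value output is the goal rather than CSV: collapse the fields into a single string before writing. Note this is untested; outputcsv will still emit a header row and may quote the combined field, so it is only a partial workaround:

```
... (same base search as above, through the rex)
| eval kvline="hostname=".hostname." ip=".ip." uts=".uts." app=".applications." state=".value
| fields kvline
| outputcsv singlefile=true create_empty=true newish_deployment_clients
```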
I guess I just revealed that I think it probably will work if I invoke it from the CLI with splunk search; I just don't want to do that, so that if I get hit by the proverbial bus, a layman can understand how to resolve any issues within the web UI on the deployment server.
Fine. I'll try it anyway.
But I'd really like to know, for this use case as well as others that may come up: how do you save search output to the file system of a heavy forwarder, SH, or deployment server via a UI-driven search?
If anyone can assist, I'd appreciate it.
Thanks,
Matt
Hi mbrownoutside,
I did something similar to display deployment server configurations (we had 2 DSs).
I scheduled a daily alert on each DS with outputcsv at the end of the search, writing to a file in $SPLUNK_HOME/var/run/splunk/csv, and I configured an input to ingest those files.
Probably most rows of your output CSV will be the same every time, so it's better to insert the date in the outputcsv name (e.g. DS_Monitoring_2019-09-12) and use the crcSalt = <SOURCE>
option in inputs.conf.
So your search will be:
your_search
| outputcsv [search * | head 1 | eval query="DS_Monitoring_".strftime(now(),"%Y-%m-%d") | fields query | format "" "" "" "" "" ""]
And your inputs.conf will be:
[monitor:///opt/splunk/var/run/splunk/csv/DS_Monitoring_*]
index=your_index
sourcetype=your_sourcetype
crcSalt = <SOURCE>
Bye.
Giuseppe
Yes, so I must configure a field extraction.
Thank you for the additional input, and thanks for pointing out the crcSalt inputs parameter. However, it appears you were answering another concern.
My problem is exporting the data in a way that will yield field extraction when cooked as-is, meaning I will not need to write transforms and props for the sourcetype.
The two ways I know of to do this are to deliver data into a file formatted as:
* XML
* key=value
I can't do this with outputcsv, unless I totally misunderstand?
If I use outputcsv, it appears that usexml=true does not actually export XML; CSV is exported, and when I tail that file for ingestion with an inputs.conf monitor stanza, the entire file is ingested as a single event and no fields are extracted.
I probably can just look up how to field-extract CSVs, and that will solve my problem. However, any specific input on this challenge would be helpful.
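For anyone landing here later, a minimal props.conf sketch for structured CSV extraction at ingest, assuming a sourcetype name of your own choosing (these are standard props.conf settings; tune for your environment):

```
[ds_monitoring_csv]
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 1
```

With INDEXED_EXTRACTIONS = csv, the forwarder doing the monitoring parses the header row and extracts each column as a field, with no hand-written transforms required.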
Thanks,
Matt
[update]
Yes, apparently I'm being lazy. See my answer.
Sorry, I misunderstood!
I use csv and I have no problems.
Anyway, it isn't possible to export XML using outputcsv, but you could use the REST API.
There is useful information at https://answers.splunk.com/answers/33418/export-splunk-results-to-an-xml-output.html and https://answers.splunk.com/answers/13739/output-xml-via-a-custom-search-command.html .
Bye.
Giuseppe
Thank you!
Hi mbrownoutside,
if my answer satisfied your question, please accept and/or upvote it.
Thank you, and see you next time.
Bye.
Giuseppe
Sure. I'm sure Splunk, like other vendors, requires you to have good standing in "the community" to maintain their stupid gamification society (possibly even affecting your ability to get certain certs). I'll accept your answer, no problem!