I wish to take a stacked bar chart, use 'addtotals' to create a field representing the cumulative value of the stacked bars, and then overlay a prediction (using 'predict') of the cumulative value on the stacked bar chart. When I try to do this, the stacked bars always revert to side-by-side in the chart. Here is some SPL that generates a sample dataset:

| makeresults count=20
| streamstats count AS count
| eval _time=now()+(count*43200)
| eval n=(random() % 1000000) + 1
| eval load=100*count*n
| eval host_name=if(count%2==0,"host1", "host2")
| timechart span=1d avg(load) AS load by host_name
| addtotals host* fieldname=load_total

If I create a visualization of this, stack the bars, and add the 'load_total' field as a chart overlay, I get the chart I want. Now I wish to add 'predict' to this query to get forecasted values for the cumulative 'load_total' field. Here is the SPL for that:

| makeresults count=20
| streamstats count AS count
| eval _time=now()+(count*43200)
| eval n=(random() % 1000000) + 1
| eval load=100*count*n
| eval host_name=if(count%2==0,"host1", "host2")
| timechart span=1d avg(load) AS load by host_name
| addtotals host* fieldname=load_total
| predict load_total algorithm=LLT future_timespan=4

Even though the bars are set to be stacked in the chart's format options, they revert to side-by-side. Any ideas on why it behaves like this, or how I can keep the stacked bars? I believe what I am trying to do is the same as in this post from @renuka, but I wanted to add a little more information. Thanks, Reid

Hi, my issue is that I have a txt file that must be indexed in its entirety each time it is modified (lines added or removed). At the moment, only the new lines are being indexed.

My txt file example:

id,name,app,env,start,end
1234,test,splunk_app,dev,29-12-2020 15:00,29-12-2020 16:00
5678,test2,splunk_app2,dev,29-12-2020 15:00,29-12-2020 16:00

My inputs.conf:

[monitor:///opt/splunk/etc/apps/<app>/bin/file.txt]
index = test
sourcetype = st
disabled = 0
crcSalt = <SOURCE>
initCrcLength = 2000

My props.conf:

[st]
SHOULD_LINEMERGE = false
DATETIME_CONFIG = CURRENT
FIELD_DELIMITER = ,
HEADER_FIELD_DELIMITER = ,
FIELD_QUOTE = "

Can you tell me how to do that? Thanks!

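A hedged note on the re-indexing behavior asked about above: crcSalt and initCrcLength only change how Splunk recognizes a file, not whether it re-reads lines it has already indexed. The props.conf CHECK_METHOD setting is the usual lever here; the sketch below keeps the [st] stanza from the question and adds a modification-time check, which should make Splunk re-read the whole file on every change (verify CHECK_METHOD against the props.conf spec for your Splunk version before relying on it):

```ini
# props.conf on the instance monitoring the file (sketch, not confirmed config)
[st]
SHOULD_LINEMERGE = false
DATETIME_CONFIG = CURRENT
FIELD_DELIMITER = ,
HEADER_FIELD_DELIMITER = ,
FIELD_QUOTE = "
# re-index the entire file whenever its modification time changes
CHECK_METHOD = modtime
```

Note the trade-off: every modification then re-indexes all lines, including unchanged ones, so searches may need to dedupe on id or keep only the latest copy per source.
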
Hi all, I'm somewhat new to Splunk; hopefully I'll describe the issue well. I've set up a deployment app to forward logs from a (non-Splunk) application for which there is no current app on Splunkbase. I configured the index and log file locations so the data goes to the index oempi, and I configured a specific server class for these servers. I also have a server class for the Splunk App for *nix, which contains these servers as well as other Linux servers used for other purposes. That app is configured to send data to the os index. For some reason I'm getting all the data from the Splunk App for *nix in the oempi index as well as the os index. Below are the settings for the two apps:

head Splunk_TA_nix/local/inputs.conf
# Copyright (C) 2020 Splunk Inc. All Rights Reserved.
[default]
index = os

[script://./bin/vmstat.sh]
interval = 60
sourcetype = vmstat
source = vmstat
disabled = 1

head OpenEMPI/local/inputs.conf
[default]
index = oempi

[monitor:///sysnet/openempi/openempi-3.5.7/logs]
whitelist=(\.log|\.out|\.txt)
disabled = 0

[monitor:///sysnet/openempi/openempi-3.5.7/openempi-entity-3.5.7/logs]
whitelist=(\.log)
disabled = 0

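One thing worth checking, offered as an assumption to verify rather than a confirmed diagnosis: [default] stanzas in inputs.conf are not scoped to the app they live in; after configuration merging they can apply to inputs defined by other apps on the same host, which would push the *nix scripted inputs into oempi. A sketch of OpenEMPI/local/inputs.conf (paths taken from the question) that sets the index per stanza instead of in [default]:

```ini
# OpenEMPI/local/inputs.conf -- index set explicitly on each input,
# no [default] stanza to leak onto other apps' inputs after merging
[monitor:///sysnet/openempi/openempi-3.5.7/logs]
index = oempi
whitelist = (\.log|\.out|\.txt)
disabled = 0

[monitor:///sysnet/openempi/openempi-3.5.7/openempi-entity-3.5.7/logs]
index = oempi
whitelist = (\.log)
disabled = 0
```
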
We have been getting messages about a high percentage of small buckets. I set logging to DEBUG on one of our indexers (Windows, Splunk 7.3.4, indexer cluster) and used this SPL to try to see why buckets are rolling:

index=_internal sourcetype=splunkd component=HotBucketRoller "finished moving"

Looking at the interesting fields, "caller" seems to indicate the reason for the bucket roll; however, only 2 of the 4 reasons make sense to me, and I can't find them documented anywhere. The values I get back are:

size_exceeded, bucket_replication_failed, lru, marked

The first two are self-evident, but what are the last two? I'm mostly interested in lru, as that makes up 30-40% of our bucket rolls. Any insight on this? My Google-fu has failed.

Hi, I recently installed the Splunk App for AWS. How do I export the AWS device inventory list from the app?

I downloaded Splunk Enterprise to follow along with a Udemy course; however, I don't want to have to pay for it once the trial expires. How can I be sure that I won't get billed when my trial expires? I'm asking because it's possible to get billed for services such as Netflix if you don't cancel them. If I remember correctly, I didn't enter any credit card details, but I can't tell for sure as the interface is confusing. I did convert to the free version, but I want to be certain I never connected a credit card or anything.

One of the episodes in my setup has not been generated, and I want to see the correlation search that ran at that time. Is that possible? If yes, how?

I have Splunk 8.0.7 at the core and the Splunk Machine Learning Toolkit at version 5.2.0. The Splunk backend is still running Python 2.7.17. How can I tell which Python version the Splunk MLTK app is running? Also, I followed the post "How to run Splunk using python 3.7" and tried adding either "python.version = force_python3" or "python.version = python3", but Splunk is still running Python 2.7 on the backend. Any ideas? Thanks.

Hi, I have a script which outputs a csv file:

id,name,env,start,end
1234,test,[env]:dev,2020-12-31 18:07,2020-12-31 19:07

The first line is the header, and ALL lines in the file must be reindexed each time there is a modification (an addition or a deletion). I have an inputs.conf like this:

[monitor:///opt/splunk/etc/apps/my_app/bin/my_csv.csv]
index = test
sourcetype = st
disabled = 0
initCrcLength = 3000

I have a props.conf like this:

[st]
DATETIME_CONFIG =
INDEXED_EXTRACTIONS = csv
KV_MODE = none
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
category = Structured
description = Comma-separated value format. Set header and other settings in "Delimited Settings"
disabled = false
pulldown_type = true

The indexing is really weird: it actually indexes only the header. Can you help me please? Thanks a lot.

hi, scenario: an elk-server has log files of test and prod systems. we have two separate splunk infrastructures, test & prod. goal: send events, based on 'source', to the test-splunk-infra or the prod-splunk-infra. thanks for your input, florian.

Can someone assist with the command to access the REST APIs of the SaaS controller using curl? I want to do this through the corporate proxy. This is what I have so far, but I am getting a 401 Unauthorized authentication error:

curl -v --proxy http://proxyusername:password@proxyaddress:proxyport --user username@account:password https://SAASURL/controller/rest/applications

Is there something I am missing? The username@account:password combination works outside the corporate proxy.

I need to get a csv file onto a heavy forwarder and forward it to an indexer. How do I add the csv file to the HF? Do I need to create a stanza in inputs.conf to monitor the file? If so, please let me know what the stanza should look like.

Hi Team, we have designed a dashboard panel where all the Azure Identity Protection Center logs are enabled. We see the trusted location shown as "Zscaler". We know very well these are trusted IPs and locations, so we don't need Zscaler showing up in our dashboard. How do we suppress "Zscaler" so it does not appear? Can we do it via a Splunk query, or can we suppress it using a conf file? Please help!

Given the following events:

HOST   VALUE
Host1  1
Host2  4
Host3  2
Host2  7
Host3  5
Host1  8

How do I maintain the latest value for each host to give a result like below?

HOST   VALUE  LATEST
Host1  1      Host1-1
Host2  4      Host1-1, Host2-4
Host3  2      Host1-1, Host2-4, Host3-2
Host2  7      Host1-1, Host2-7, Host3-2
Host3  5      Host1-1, Host2-7, Host3-5
Host1  8      Host1-8, Host2-7, Host3-5

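A minimal Python sketch of the running "latest value per host" logic asked for above, using the sample rows from the question. In SPL this is roughly what a running streamstats by HOST would track; the rendering of the LATEST column as "Host-Value" pairs is the part shown here.

```python
# Events in the order given in the question: (HOST, VALUE) pairs.
events = [("Host1", 1), ("Host2", 4), ("Host3", 2),
          ("Host2", 7), ("Host3", 5), ("Host1", 8)]

latest = {}   # most recent VALUE seen so far for each HOST
rows = []
for host, value in events:
    latest[host] = value
    # render the accumulated state as "Host-Value" pairs, sorted by host name
    rows.append(", ".join(f"{h}-{v}" for h, v in sorted(latest.items())))

for row in rows:
    print(row)
```

Running this prints exactly the LATEST column from the desired table, ending with "Host1-8, Host2-7, Host3-5".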
hi guys, getting spammed by these messages in splunkd.log. Any idea what this is about and how to stop them? We have two separate but identical environments, and on both the cluster master's splunkd.log is getting filled up with these messages. Tried restarting the whole stack but it didn't help. Version is 8.1.0.

12-30-2020 09:42:54.404 -0500 WARN Fixup - GenCommitFixup::finish error in scheduler sendQueued=
12-30-2020 09:42:54.404 -0500 WARN CMMaster - Unable to send scheduled jobs, err=""
12-30-2020 09:42:54.904 -0500 WARN Fixup - GenCommitFixup::finish error in scheduler sendQueued=
12-30-2020 09:42:54.904 -0500 WARN CMMaster - Unable to send scheduled jobs, err=""
12-30-2020 09:42:55.405 -0500 WARN Fixup - GenCommitFixup::finish error in scheduler sendQueued=
12-30-2020 09:42:55.405 -0500 WARN CMMaster - Unable to send scheduled jobs, err=""

(the same pair of messages repeats roughly every half second)

Hi, I have a script on a heavy forwarder which outputs a csv file in /bin. I have an inputs.conf set like this:

[script://./bin/my_script.py]
interval = * * * * *
index = test
sourcetype = test:st
disabled = 0

[monitor:///opt/splunk/etc/apps/my_app/bin/csv_file.csv]
index = test
sourcetype = test:st
disabled = 0
crcSalt = <SOURCE>

My props.conf:

SHOULD_LINEMERGE = false
FIELD_DELIMITER=,
HEADER_FIELD_LINE_NUMBER = 1
HEADER_FIELD_DELIMITER=,
INDEXED_EXTRACTIONS=csv

My issue is that my csv is not indexed in Splunk. In index=_internal I have these INFO messages (from /opt/splunk/var/log/splunk/splunkd.log on the heavy forwarder):

12-30-2020 14:35:14.261 +0000 INFO TailingProcessor - Adding watch on path: /opt/splunk/etc/apps/my_app/bin/csv_file.csv.
12-30-2020 14:35:14.261 +0000 INFO TailingProcessor - Parsing configuration stanza: monitor:///opt/splunk/etc/apps/my_app/bin/csv_file.csv.
INFO ExecProcessor - setting reschedule_ms=59661, for command=python /opt/splunk/etc/apps/my_app/bin/my_script.py

There are no ERROR messages at the script level either... What did I do wrong?

Hi, I have an XML file which I want to load into Splunk and parse during indexing:

<measType p="1">node_cpu.idle</measType>
<measType p="2">node_cpu.iowait</measType>
<measType p="3">node_cpu.irq</measType>
<measType p="4">node_cpu.nice</measType>
<measType p="5">node_cpu.softirq</measType>
<measType p="6">node_cpu.steal</measType>
<measType p="7">node_cpu.system</measType>
<measType p="8">node_cpu.user</measType>
<measValue measObjLdn="host=xxxxxxxx-auxiliary-0,platform=1,cpu=0,instance=xxxxxxx-auxiliary-0:8001,job=node">
<r p="1">4366091.08</r>
<r p="2">27479.68</r>
<r p="3">0</r>
<r p="4">479</r>
<r p="5">18939.5</r>
<r p="6">1157.43</r>
<r p="7">73459.64</r>
<r p="8">64291.39</r>
</measValue>

This has measType p=1..8 and the corresponding values in measValue p=1..8. I would like to create fields like:

node_cpu.idle=4366091.08
node_cpu.iowait=27479.68

and so on. Could you please guide me on how I can achieve this using props.conf and transforms.conf settings?

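Because props/transforms operate per event, the positional join between measType and measValue is hard to express there; one common alternative (offered as a sketch, not the only answer) is to pre-process the file with a scripted input before indexing. The snippet below shows the join logic on a trimmed, hypothetical copy of the XML from the question, wrapped in a <measInfo> root so it parses:

```python
import xml.etree.ElementTree as ET

# Trimmed reproduction of the question's XML (two metrics instead of eight),
# wrapped in a root element; the real file layout may differ.
xml = """<measInfo>
  <measType p="1">node_cpu.idle</measType>
  <measType p="2">node_cpu.iowait</measType>
  <measValue measObjLdn="host=aux-0,cpu=0">
    <r p="1">4366091.08</r>
    <r p="2">27479.68</r>
  </measValue>
</measInfo>"""

root = ET.fromstring(xml)
# position -> metric name, taken from the measType elements
names = {m.get("p"): m.text for m in root.findall("measType")}
# metric name -> value, joining each <r> back to its name via the p attribute
fields = {names[r.get("p")]: r.text
          for mv in root.findall("measValue")
          for r in mv.findall("r")}

# Emit key=value pairs that Splunk can field-extract trivially
for name, value in fields.items():
    print(f"{name}={value}")
```

The printed lines are exactly the node_cpu.idle=4366091.08 style output the question asks for.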
I need to fetch the 'sid' value from the JSON below. For that I prepared the following query, but it is not working:

| rex field=_raw "sid\":\"(?P<sid>.[^\"\,\"]*)"
| stats count by sid

{ "sig_date": "11-SEP-2020 ", "exe_date": "", "sent_date": "13-JAN-2020", "appr_date": "06-AUG-2020", "comp_date": "08-SEP-2020", "sid": "0ac00001789", "visit": "Yes" }

Can anyone please validate this query?

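One likely problem, checked here with Python's re module rather than in Splunk itself: the sample event has a space after the colon ("sid": "0ac..."), which the original pattern sid\":\" does not allow, and the class [^\"\,\"] also stops at commas. Capturing everything between the quotes after "sid": is simpler; the rough SPL equivalent of the pattern below would be | rex "\"sid\":\s*\"(?<sid>[^\"]+)\"".

```python
import re

# The sample event from the question, as a raw string
event = ('{ "sig_date": "11-SEP-2020 ", "exe_date": "", "sent_date": "13-JAN-2020", '
         '"appr_date": "06-AUG-2020", "comp_date": "08-SEP-2020", '
         '"sid": "0ac00001789", "visit": "Yes" }')

# \s* tolerates the space after the colon; [^"]+ grabs up to the closing quote
m = re.search(r'"sid":\s*"(?P<sid>[^"]+)"', event)
print(m.group("sid"))  # 0ac00001789
```

If the events are valid JSON, spath (or KV_MODE=json) would avoid the regex entirely.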
My raw data looks like this:

wed } } }, { "S" : "12:00" } } }, "day" M" : { "close" : { "S" : "23:00" open "S" : "12:00" } } } } }, "email" : { "S" : " DHours" } } }, "email" : { "S" : " Hours" } } }]

I used the rex below:

| rex "wed.\D}.\D.\D.(?<sample>.*)DHours"

and it gives a result like:

{ "S" : "12:00" } } }, "day" M" : { "close" : { "S" : "23:00"open "S" : "12:00" } } } } }, "email" : { "S" : "

but I need the output below:

{ "S" : "12:00" } } }, "day" M" : { "close" : { "S" : "23:00"open "S" : "12:00" } } } } }

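The difference between the two outputs is the trailing , "email" : { "S" : " fragment, so one way to get the desired result is to end the capture at the first , "email" after the "wed" anchor instead of capturing greedily up to DHours. A sketch of that idea, checked here with Python's re module on a trimmed copy of the raw text; the rough SPL equivalent would be | rex "wed \} \} \}, (?<sample>.*?), \"email\"".

```python
import re

# Trimmed copy of the raw text from the question
raw = ('wed } } }, { "S" : "12:00" } } }, "day" M" : { "close" : { "S" : "23:00" '
       'open "S" : "12:00" } } } } }, "email" : { "S" : " DHours" } } }')

# Non-greedy capture: .*? stops at the FIRST `, "email"` after the anchor,
# so the unwanted tail never makes it into the capture group
m = re.search(r'wed \} \} \}, (?P<sample>.*?), "email"', raw)
print(m.group("sample"))
```

The capture starts at { "S" : "12:00" and ends at the closing } } } } }, with no "email" fragment included.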
Morning Community, I'm looking for a way to pull multiple columns into an alert I'm attempting to build. The syntax below gives me hits on a src IP that appears in the lookup file, which is great; however, I also want to pull in extra columns associated with the hit on that IP. The lookup file contains columns labelled "tag", "info", and "comment", which would further enhance the usefulness of this alert.

index=netscaler sourcetype="citrix:netscaler:syslog" citrix_netscaler_event_name=LOGIN action=success app=SSLVPN
| search [| inputlookup my.csv | rename value as src | fields src ]

I've tried this and it returns no results:

index=netscaler sourcetype="citrix:netscaler:syslog" citrix_netscaler_event_name=LOGIN action=success app=SSLVPN
| search [| inputlookup my.csv | rename value as src | fields src ]
| lookup my.csv info as src.info output info as src.info

Any thoughts at all? This article was a little similar to what I'm trying to do, except I need the extra columns from the src IP hits in the first part of the alert: https://community.splunk.com/t5/Splunk-Search/Pulling-multiple-Columns-from-an-inputlookup/m-p/424742
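For reference, here is the enrichment the lookup should perform, sketched in Python with hypothetical rows standing in for my.csv. In SPL the equivalent would be roughly | lookup my.csv value AS src OUTPUT tag info comment, i.e. match the csv's "value" column against the event field "src" and output the extra columns, rather than the "info as src.info" form in the second search above.

```python
# Hypothetical lookup table: csv "value" column -> extra columns
lookup_rows = {
    "203.0.113.7": {"tag": "vpn-user", "info": "contractor", "comment": "watch"},
}

# One hypothetical event that already matched the first part of the alert
events = [{"src": "203.0.113.7", "action": "success"}]

for event in events:
    extra = lookup_rows.get(event["src"])  # match event src against the table
    if extra:
        event.update(extra)                # add tag/info/comment to the event

print(events[0])
```

After the update, the event carries tag, info, and comment alongside its original fields, which is what the OUTPUT clause of the lookup provides in the alert.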