All Posts


Have a look at this example, it may help; other than that, work through the documentation: splunk-app-examples/custom_alert_actions/slack_alerts/default/data/ui/alerts/slack.html at master · splunk/splunk-app-examples · GitHub
Hello @bofasplunkguy, I am in the same predicament as you. Did you ever find an answer to your problem?
@jaibalaraman, your searches return a consistent set of results regardless of the time zone you are in.
Hi @jaibalaraman, you can specify an exact time such as earliest="10/5/2021:20:00:00", or a relative time such as earliest=-h or latest=@w6. When specifying relative time, you can use the now modifier to refer to the current time.
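For example, a sketch combining the two styles (the index and sourcetype are placeholders):

index=web sourcetype=access_combined earliest=-24h@h latest=now

index=web earliest="10/5/2021:20:00:00" latest="10/6/2021:20:00:00"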
Hello there, I also want to render a Splunk app's dashboard on my website securely. Is there any way to do that? I have successfully accessed an existing dashboard's XML definition by following this guideline: data/ui/views/{name}. Now my question is how to convert the Splunk app's dashboard XML to HTML so we can show that dashboard on my website. Thanks for your support.
Hi, how does Splunk ES create incidents from notable events? I'm aware that a correlation search in Splunk ES creates a notable event in the "notable" index, but exactly how does it get from there to the "Incident Review" dashboard in Splunk ES? As far as I know, the incidents exist in a KV store collection, and I would then assume that there is some scheduled job that takes notable events from the "notable" index and puts them in the KV store collection. The reason I'm asking is that we are missing incidents in our "Incident Review" dashboard, but the corresponding notable events exist in the notable index. So it looks like the "notable event to incident" job has failed somehow. Is this documented somewhere in more detail?
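For what it's worth, my working assumption (unverified) is that Incident Review is driven at search time by the ES `notable` macro, which reads from the notable index and enriches events with status data from the KV store, rather than by a copy job. Comparing a raw count with the macro's output over the same time range might show where events drop out:

index=notable | stats count

| `notable` | stats count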
It looks like the file has not been indexed for a few days, and in addition I found the below warnings:

05-01-2024 02:18:07.646 -0400 WARN TailReader [4549 tailreader0] - Insufficient permissions to read file='/xxx/xxx/xxx/xxx/xxx.csv' (hint: No such file or directory , UID: 0, GID: 0).

How can I go about checking the permissions? Thank you.
#your base search which produces the logs, e.g. index=abc sourcetype=abc
index=firewall sourcetype=abc
| rex field=_raw "is\s(?P<ip>.*)"
| table _raw ip
| stats count by ip

Hi @nsiva, if this search does not work, please show us a screenshot. Thanks.
So try this - think of it as version 1.0; it's a bit of trial and error until you get it right. Tip - it's always good practice to place the data into a test index first to get it all working; once good, move to a production index by just changing the index in inputs.conf after the testing and props dev work.

props.conf
[jlogs]
TIME_PREFIX = Entry\s\d+\sstarting\sat
TIME_FORMAT = %d/%m/%Y %H:%M:%S
BREAK_ONLY_BEFORE = ([\r\n]+)\.net
MUST_BREAK_AFTER = local\stime([\r\n]+)
MAX_TIMESTAMP_LOOKAHEAD = 50
SHOULD_LINEMERGE = true
TRUNCATE = 10000
NO_BINARY_CHECK = 1
KV_MODE = auto
# Remove unwanted headers or data
# This config is no longer needed - left for reference
#TRANSFORMS-null = remove_unwanted_data_from_jlog

See what this looks like.
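A minimal inputs.conf sketch for the test-then-prod flow (the monitor path is a placeholder to adapt):

inputs.conf
[monitor:///var/log/jlogs/app.log]
sourcetype = jlogs
index = test
# once the props work is verified, switch to the production index
#index = prod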
Hey @nsiva, The query that @inventsekar posted will work with any IP address, provided the raw event is: 123 IP Address is 1.2.3.4. Can you please elaborate on why the solution doesn't work for you? For your reference, I've used 4.3.2.1 in _raw and it still extracts the IP address. To assist you better, it would be great if you could provide the raw events so the ip field can be extracted from them. You can redact the sensitive information. Thanks, Tejas.
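If it helps, the extraction can be reproduced without any indexed data by faking an event with makeresults:

| makeresults
| eval _raw="123 IP Address is 4.3.2.1"
| rex field=_raw "is\s(?P<ip>.*)"
| table _raw ip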
I am using the query below:

index="test" sourcetype="reports"
| bin _time span=1m
| stats values(a) as a values(b) as b values(c) as c values(d) as d values(e) as e values(f) as f values(g) as g by par1, _time
| append [search (index="test" sourcetype=reports_metadata) | table par1,par2,par3,par4,par5,par6,par7,par8,par9,par10,par11,par12]
| eventstats values(par2) as par2, values(par3) as par3, values(par4) as par4, values(par5) as par5, values(par6) as par6, values(par7) as par7, values(par8) as par8, values(par9) as par9, values(par10) as par10, values(par11) as par11, values(par12) as par12, values(a) as a values(b) as b values(c) as c values(d) as d values(e) as e values(f) as f values(g) as g by par1
| search par2 IN ("*") par3 IN ("*") par4 IN ("*") par5 IN ("*") par6 IN ("*") par7 IN ("*") par8 IN ("*") par9 IN ("*") par10 IN ("*")
| search par1="*" par2 IN ("*") par3 IN ("*") par4 IN ("*") par5 IN ("*") par6 IN ("*") par7 IN ("*") par8 IN ("*") par9 IN ("*") par10 IN ("*") par11 IN ("*") par12 IN ("*")
| timechart span=15m values(a) by par1 limit=0

With this query I can use any value ranging from a to g and plot a time series graph. I need help plotting a time series for one or more of these values, and also with how the value to plot can be picked from a drop-down filter. #timeseries #timechart #xyseries #multiseries #multivalue
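To illustrate what I'm after, a rough Simple XML sketch (the token name metric_tok is just a placeholder of mine):

<input type="dropdown" token="metric_tok">
  <label>Metric</label>
  <choice value="a">a</choice>
  <choice value="b">b</choice>
  <default>a</default>
</input>

with the final pipeline becoming something like:

| timechart span=15m values($metric_tok$) by par1 limit=0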
When the retention period is reduced, any buckets where all data is older than the new retention period will be frozen.  When archiving is enabled, freezing buckets means moving them to archive storage.
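For reference, the settings involved live in indexes.conf and look roughly like this (the stanza name, period, and path are placeholders):

[my_index]
# 90 days; buckets whose newest event is older than this are frozen
frozenTimePeriodInSecs = 7776000
# when set, frozen buckets are moved here instead of deleted
coldToFrozenDir = /archive/my_index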
Hello @kkarthik2, Please consider migrating from the Splunk App for AWS to the AWS Content Pack along with ITEW/ITSI. ITE Works is a free application that helps integrate and visualize the different AWS app dashboards. As per Splunkbase, the Splunk App for AWS is archived and has reached EOL. The following is the documentation link for migrating from the Splunk App for AWS to the Content Pack for AWS: https://docs.splunk.com/Documentation/CPAWSDash/1.4.0/CP/Migrate Also, the reason you see so many warnings is that the old app references custom JS, which is now unsupported by Splunk, and hence those visualizations will not render on the dashboard. Thanks, Tejas. --- If the above solution helps, an upvote is appreciated.
@inventsekar This works only for the ip address 1.2.3.4. What do I do if the ip address changes to 5.6.7.8 or 4.3.2.1? 
After upgrading to 9.2.0, the Splunk App for AWS got the same error: "A custom JavaScript error caused an issue loading your dashboard. See the developer console for more details." Unable to open the dashboard. Looking at the developer console, the below error is showing. How do I bring back the dashboards of the Splunk App for AWS? Current version of the Splunk App for AWS: 6.0.2.
I want a scheduled task to run the query and save it twice a day, every day.
Hi @m92, you can schedule the runs of your alert twice a day using cron: 0 12,19 * * * The question is: do you want the same time period (e.g. 24 hours) on both of the searches? Ciao. Giuseppe
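For reference, the equivalent savedsearches.conf sketch (the stanza name, time range, and address are placeholders to adapt):

[My twice-daily report]
enableSched = 1
cron_schedule = 0 12,19 * * *
dispatch.earliest_time = -12h
dispatch.latest_time = now
action.email = 1
action.email.to = you@example.com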
Thanks! This helped me move forward; just one thing, if you can help. I have done it all, I'm just not sure what I should be putting in the HTML (https://dev.splunk.com/enterprise/docs/devtools/customalertactions/createuicaa/) so that I can pass the IP to the Akamai API.
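From the docs, my understanding is that the HTML fragment is just form controls whose name attributes follow the action.<alert_action>.param.<param> convention; something like this sketch, where akamai_alert and ip are names I'm assuming from my alert action's stanza:

<form class="form-horizontal form-complex">
  <div class="control-group">
    <label class="control-label" for="akamai_alert_ip">IP address</label>
    <div class="controls">
      <input type="text" id="akamai_alert_ip" name="action.akamai_alert.param.ip" />
      <span class="help-block">A token such as $result.ip$ should pass the IP from the triggering search results.</span>
    </div>
  </div>
</form>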
Hello Splunkers, I'd like to schedule a query twice a day, for example one at 12:00 PM and the other at 7:00 PM, and then receive a report of each run. This would save me from having to run the query manually each time. Is it possible, and if so, how can I do it? The query in question is:

(index="index1" Users=* IP=*) OR (index="index2" tag=1)
| where NOT match(Users, "^AAA-[0-9]{5}$")
| where NOT match(Users, "^AAA[A-Z0-9]{10}$")
| eval ip=coalesce(IP, srcip)
| stats dc(index) AS index_count values(Users) AS Users values(destip) AS destip values(service) AS service earliest(_time) AS earliest latest(_time) AS latest BY ip
| where index_count>1
| eval earliest=strftime(earliest,"%Y-%m-%d %H:%M:%S"), latest=strftime(latest,"%Y-%m-%d %H:%M:%S")
| table Users, ip, destip, service, earliest, latest

Thanks in advance!