All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi All, I have a requirement where I want to set up an alert to run every 10 minutes on Friday between 8 and 10 PM, and every 10 minutes on Sunday between 6 and 8 AM. I tried writing the cron expression for it, but it didn't work. Can you please help?
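For reference, a single cron expression cannot express two different day-and-hour windows, so a common workaround is two copies of the alert, one per window. A sketch, assuming "between 8-10pm" means 20:00 through 21:50:

```
# Alert 1 -- Friday, every 10 minutes from 20:00 through 21:50 (day-of-week 5 = Friday)
*/10 20-21 * * 5

# Alert 2 -- Sunday, every 10 minutes from 06:00 through 07:50 (day-of-week 0 = Sunday)
*/10 6-7 * * 0
```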
Hi all, I installed the Akamai SIEM Integration app on the deployer for the SHC successfully, installed JRE 1.8 successfully, and configured the "Akamai SIEM API" data input for the Akamai Control dashboard successfully. However, the Akamai Logging dashboard shows the following error:

ERROR ExecProcessor - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" javax.xml.stream.XMLStreamException: No element was found to write: java.lang.ArrayIndexOutOfBoundsException: -1

Anyone have any clues? Is this a pathing issue? Mike/deepdiver
Hello everyone, I have the following question about use cases (anything under Enterprise Security > Content). Let's say I have 5 sourcetypes. If I create a new correlation search that I want to work for these 5 sourcetypes, I would have something like:

index=something sourcetype=something1 OR sourcetype=something2 OR sourcetype=something3 OR sourcetype=something4 OR sourcetype=something5

That would mean that whenever a new sourcetype is onboarded, I would have to manually add it to every correlation search I created, as well as to the ones included by default in the Splunk Enterprise Security content. How do the default ES correlation searches work with other sourcetypes if the sourcetypes weren't specified in the query?
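One common pattern (a sketch, not necessarily how every shipped ES search is built; the eventtype name is hypothetical) is to hide the sourcetype list behind an eventtype or macro, so onboarding a new sourcetype is a single edit in one place:

```
# eventtypes.conf -- hypothetical eventtype name
[my_security_events]
search = index=something (sourcetype=something1 OR sourcetype=something2 OR sourcetype=something3 OR sourcetype=something4 OR sourcetype=something5)
```

The correlation search then starts with `eventtype=my_security_events` instead of the explicit list. The searches shipped with ES typically go a step further and query CIM data models, which events join via tags rather than hard-coded sourcetypes.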
Hello, logs are being collected through Cisco eStreamer. I want to convert the hex of the packet field to ASCII. If you know how to convert the packet field of Cisco eStreamer to ASCII, please share it. Thank you.
I have a cluster of indexers i1, i2 and i3, and I am not seeing any data coming from universal forwarder f1 to the custom index "network". I can see index=_internal host="f1" on search head sh, but nothing in the network index. I am filling up the file random.log on f1:

[ec2-user@f1 log]$ sudo /opt/splunkforwarder/bin/splunk btool inputs list monitor:///var/log/*.log
[monitor:///var/log/*.log]
_rcvbuf = 1572864
disabled = 0
host = $decideOnStartup
index = network

[ec2-user@f1 log]$ cat /var/log/random.log
Success 655
Error 78

The forwarder seems connected to the indexers:

[ec2-user@f1 log]$ sudo tail -f /opt/splunkforwarder/var/log/splunk/splunkd.log
09-14-2022 12:59:15.389 +0000 INFO AutoLoadBalancedConnectionStrategy [2938 TcpOutEloop] - Connected to idx=10.0.7.4:9997, pset=0, reuse=0. using ACK.
09-14-2022 12:59:45.300 +0000 INFO AutoLoadBalancedConnectionStrategy [2938 TcpOutEloop] - Connected to idx=10.0.7.2:9997, pset=0, reuse=0. using ACK.
^C

[ec2-user@f1 log]$ sudo /opt/splunkforwarder/bin/splunk list forward-server
Active forwards:
10.0.7.2:9997
10.0.7.4:9997
Configured but inactive forwards:
10.0.7.3:9997

This is how it looks on one of the indexers:

[ec2-user@i1 ~]$ sudo /opt/splunk/bin/splunk list index | grep network
network /opt/splunk/etc/network/db /opt/splunk/etc/network/colddb /opt/splunk/etc/network/thaweddb

[ec2-user@i1 ~]$ sudo ls -l /opt/splunk/etc/network/db
total 4
-rw------- 1 splunk splunk 10 Sep 14 11:45 CreationTime
drwx--x--- 2 splunk splunk 6 Sep 14 11:45 GlobalMetaData
Hello, I am trying to list the fields I have selected into a single field to display in a dashboard. Currently I am trying

| eval Details = mvappend('src', 'dest')

but this only lists the values. What I am trying to achieve is listing the field name and value, for example:

src=192.168.0.1
dest=192.168.0.2

etc. Any help appreciated, thanks.
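For reference, a sketch that prepends each field name by concatenating it with the value (the `.` operator is SPL's string concatenation; the field names are taken from the post):

```
| eval Details = mvappend("src=" . src, "dest=" . dest)
```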
Greetings! The target field is message_id, and sometimes the field value comes with brackets, <b8047a671f47430cb44afbf14d332c63@domain.com>, and sometimes it doesn't: b8047a671f47430cb44afbf14d332c63@domain.com. I'm trying to use rex mode=sed to replace < and > with nothing (effectively removing the brackets), so that the field can later be used in a deduplication process (outside Splunk), but I can't get it to work. I tried

rex field=message_id mode=sed "s/<>//g"

but no substitution occurs, while

rex field=message_id mode=sed "y/<>//g"

throws the error: Error in 'rex' command: Failed to initialize sed. '<>' and '' are different length.

What gives?
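For reference, s/<>//g only matches the literal two-character sequence <> (which never occurs in these values), and y/// requires source and destination sets of equal length. A sketch using a character class to remove either bracket wherever it appears (standard sed substitution syntax, which rex mode=sed accepts):

```
| rex field=message_id mode=sed "s/[<>]//g"
```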
Hello Team, is it possible to create an error report that runs every 30 minutes, but where the mail notification is raised only if 20 or more ERROR events were generated in the last 30 minutes? Example: Index=ABC sourcetype=XYZ "ERROR"=999. I need help creating a report like this.
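A sketch of one way to do this: schedule the alert every 30 minutes over the last 30 minutes, count the matching events, and only return a result when the threshold is met (the index and sourcetype are from the post; the exact search term is an assumption):

```
index=ABC sourcetype=XYZ "ERROR"
| stats count
| where count >= 20
```

With this shape, the alert's trigger condition can simply be "number of results > 0".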
Is there a way in AppDynamics to know the average app start-up time? I am able to figure out from the sessions created that it captures the splashscreenActivity time, which is the app start-up time. Is there any way I can get the average start-up time? If we could segregate warm-start and cold-start times, that would be very good. Thanks.
Is it possible in Splunk ITSI to add a cron schedule to a maintenance window?
Hello, I want zoom to be replicated across all graphs generated with "Use Trellis Layout". There are 10 timecharts. Is there a way to do it using tokens? I have tried to do it through "Manage tokens on this dashboard" but I am not able to. Has anyone managed it?
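For reference, in Simple XML a chart can publish its selected time range as tokens, which other charts can then consume. A sketch (the token names are hypothetical, and the searches are placeholders):

```xml
<chart>
  <search>...</search>
  <!-- Publish the selected range when the user drags across the chart -->
  <selection>
    <set token="zoom.earliest">$start$</set>
    <set token="zoom.latest">$end$</set>
  </selection>
</chart>

<!-- Each of the other charts consumes the tokens as its time range -->
<chart>
  <search>
    <query>...</query>
    <earliest>$zoom.earliest$</earliest>
    <latest>$zoom.latest$</latest>
  </search>
</chart>
```

This wires charts to each other rather than replicating the built-in zoom gesture, so it is an approximation of the behavior being asked for.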
Hello, we are using the Splunk App for Salesforce in the Splunk Cloud environment. We noticed that every week at midnight the app's saved search "Lookup - ACCOUNT_ID TO ACCOUNT_NAME" creates a CSV file called "lookup_sfdc_accounts.csv", which in our case is populated with over 4 million lines; consequently the file size is nearly 500 MB. The problem is that due to the size of this lookup, Splunk Cloud cannot replicate the bundle, and the following message appears: "The current bundle directory contains a large lookup file that might cause bundle replication to fail. The path to the directory is [...]". We do not have the ability to filter events and reduce the size of the lookup. Has anyone been in the same situation? Is it possible to solve this somehow, for example by migrating the lookup to the KV store? Any suggestions? The app is not directly supported by Splunk, and I cannot find the developer's contacts to submit a case.
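For reference, migrating a lookup to the KV store is usually a matter of defining a collection and repointing the lookup definition at it. A sketch (the collection name and field list below are assumptions, not taken from the app):

```
# collections.conf
[sfdc_accounts_kv]

# transforms.conf
[lookup_sfdc_accounts]
external_type = kvstore
collection = sfdc_accounts_kv
fields_list = _key, ACCOUNT_ID, ACCOUNT_NAME
```

KV store lookups replicate through the KV store's own mechanism rather than the knowledge bundle, which is why this can sidestep bundle-size warnings; whether overriding an app-shipped lookup definition is supported in your Splunk Cloud environment is worth confirming with support.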
I tried accessing an API using bearer tokens with TA-Webtools, but I am getting an SSL error. I tried verifyssl=false but I still get the same error. Please help me solve this. @jkat54
Splunk Cloud support is unable to upgrade App #3720 (TA-MS_O365_Reporting) to version 2.0 due to a bad packaging issue. Microsoft is going to retire the legacy protocol (https://techcommunity.microsoft.com/t5/exchange-team-blog/basic-authentication-and-exchange-online-september-2021-update/ba-p/2772210) irrespective of its usage by 1 October 2022. By updating to version 2.0 we would be able to use OAuth instead. Can the app developer get this addressed in the current add-on?
I push logs to Splunk using the HEC method via the endpoint "/services/collector". The index shows 1 MB in the index manager, but when I search through the index, the event count is always "0". Only the default configtracker events are showing.
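For reference, when an index grows but searches return nothing, the events often carry timestamps outside the searched time window. A sketch of a check over all indexed time (the index name is a placeholder):

```
index=your_index earliest=0
| stats count min(_time) as first_event max(_time) as last_event
| convert ctime(first_event) ctime(last_event)
```

If events appear here with unexpected first/last times, the HEC payload's timestamp handling is the likely culprit.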
Hi, my job completes at 4 AM. I need to set up an alert that monitors the job status starting 2 hours before the completion time, i.e. from 2 AM I should start checking whether the job has completed, and keep monitoring and triggering the alert until it has. I am using the query below, but it doesn't make sense and doesn't satisfy the condition above:

| makeresults
| eval CurrentTime="05:00:00"
| eval CurrentTimepoch=strptime(CurrentTime,"%H:%M:%S")
| eval SLATIME="04:00:00"
| eval SLATIMEepoch=strptime(SLATIME,"%H:%M:%S")
| eval Diff=(SLATIMEepoch-CurrentTimepoch)
| eval Duration=if(Diff<0, "-", "") + tostring(abs(Diff), "duration")
| eval check1=case(Duration>="02:00:00" AND STATUS!=C, "Trigger", 1=1, "Dont")

Please help me capture the specific time, i.e. 2 AM, and start checking the job status in the query.
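A sketch of a simpler approach: put the time window in the alert's cron schedule rather than computing durations in SPL, and let the search just test the status (the index, sourcetype, and the STATUS field with "C" meaning complete are assumptions based on the post):

```
# Alert cron schedule: every 10 minutes from 02:00 through 03:50
*/10 2-3 * * *

# Alert search: return a row only while the latest known status is not complete
index=your_index sourcetype=your_job_logs
| stats latest(STATUS) as STATUS
| where STATUS != "C"
```

With the trigger condition "number of results > 0", the alert fires on each scheduled run until the job reports complete.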
Hi to all. I'm working at a startup company providing security solutions, and I started researching how to integrate with Splunk and Splunk ES. For now, we chose the HEC method for delivering the data into Splunk Cloud. I wanted to ask some questions: do I need to create an add-on? What actions do I need to take to integrate with Splunk ES? I understand the flow of actions is:

1. Load data using HEC.
2. Parse the data, normalizing it.
3. Eventually, load the data into data models.
4. If you don't load data into data models, create your correlation searches using indexes.

I'd be happy if someone could elaborate more on each topic and tell me if something is missing.
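For reference, the normalization and data model steps above usually mean mapping fields to the Common Information Model and tagging events so they flow into the right data model. A sketch of what an add-on might ship (the sourcetype, field, and eventtype names are all hypothetical):

```
# props.conf
[vendor:security:json]
KV_MODE = json
FIELDALIAS-cim_src = source_ip AS src

# eventtypes.conf
[vendor_attack_events]
search = sourcetype=vendor:security:json

# tags.conf -- the ids/attack tags route events into the Intrusion Detection data model
[eventtype=vendor_attack_events]
ids = enabled
attack = enabled
```

Packaging this as an add-on is not strictly required, but it is the conventional way to distribute the parsing and CIM mapping alongside the HEC sourcetype.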
What are the unique features of Splunk compared to other tools?
I am using HEC to push data to Splunk. In the HEC configuration we have a Source field, and the log I am forwarding to Splunk also has a field named Source. The issue I am facing is that both source values get merged, and on each log I see two values for source. I don't want to change the field in my log; is there a way I can change something on the HEC side?
Hi All, I have created a custom event search that gives me data about the top running SQLs. However, when I create an alert on it, the email only contains the header information and not the event details. Can you please help me understand how to get the event details into the email? Thanks.