All Topics

I currently have a lookup that contains two columns, Hostnames and Location. I can use the following search to look for "squirrel" across all hostnames in this lookup:

"squirrel" [| inputlookup mylookup.csv | fields MY_Hostname | rename MY_Hostname as host]

It works like a dream. I've also set up an alert for this to trigger once. The issue I have is that the alert email consolidates all the different matches. For example, "squirrel" is found a few times in hostname_1, twice in hostname_3, and 17 times in hostname_8; the email that is sent contains all the "squirrel" logs for all the hosts. What I would like to do is separate out each alert by individual hostname. So, in this example, I should receive three email alerts: one for hostname_1 with a few records, one for hostname_3 with two records, and one for hostname_8 with 17 records. Is there a way to perform a sort of for loop over the lookup so that I can simply update the lookup instead of having to manage a bunch of alerts?
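One possible way to get one notification per host (a sketch built on the search from the question): aggregate the matches by host and let the alert fire once per result row.

```
"squirrel" [| inputlookup mylookup.csv | fields MY_Hostname | rename MY_Hostname as host]
| stats count AS match_count BY host
```

With the alert's trigger mode set to "For each result", one alert action runs per row, and tokens such as $result.host$ and $result.match_count$ can be used in the email subject and body so each message identifies its host.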
Hello, I'm working on creating automated alerts from an email security vendor and would like them to include only the names of files/attachments that have the "attached" disposition within a nested JSON structure. The example below shows what I'm talking about in a limited/trimmed capacity:

messageParts: [
  { contentType: image/png, disposition: attached, filename: example.png, md5: xxyy, sha256: xxyy }
  { contentType: text/html, disposition: inline, filename: text.html, md5: xxyy, sha256: xxyy }
  { contentType: text/plain, disposition: inline, filename: text.txt, md5: xxyy, sha256: xxyy }
]

Essentially I'd like to pull and store the respective "filename" and hash values for when the "disposition" field is "attached" but not "inline". I know this can likely be done using something like spath or mvfind, but I'm not entirely sure how to accomplish it and it's giving me fits. Anyone who can lend a helping hand would be handsomely rewarded with karma and many well wishes. Thanks for taking the time to consider my question!
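One possible approach (a sketch; the path messageParts{} is assumed from the example event): expand the multivalue array so each message part becomes its own row, extract the per-part fields, then keep only the attached ones.

```
| spath path=messageParts{} output=part
| mvexpand part
| spath input=part
| where disposition="attached"
| table filename md5 sha256
```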
We installed SplunkVersionControl on-prem and SplunkVersionControlCloud in Splunk Cloud. Backup is successful but the restore will not work: the original saved search "SplunkVersionControl Audit Query" never returns an entry. Trying to tweak the query to get at least one result ends in timestamps that differ from the lookup entry.

log: unable to find a time entry of time=xx matching the auditEntries list of [{list of timestamp(s) different from xx, user for auditEntries}]

I really like the idea of storing my knowledge objects in git, so getting the restore working is crucial for us. BR, Mike
I have looked at the join documentation, but I am getting a little lost in translation. What I am trying to accomplish is to pull data from one index, then join on IP address to another index to pull different information and display all of it. Example:

index=firewall dest_port=21
| stats values(dest_ip) as dest_ip values(dest_port) as dest_port sum(bytes_in) as bytes_in sum(bytes_out) as bytes_out values(app) as app values(rule) as rule by user src _time

index=edr RPort=21 RemoteIP=$dest_ip-from-first-search

The output should be a table with the following: firewall._time, firewall.src, firewall.dest_ip, edr.username, edr.processname. The issue I am running into is that the IP address field is named differently between the two indexes; ideally I would join firewall.dest_ip to edr.RemoteIP. Any help would be appreciated.
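A possible shape for the join, renaming the EDR field so both sides share a key (a sketch; the field names are taken from the question, and join's subsearch limits may matter on large data sets):

```
index=firewall dest_port=21
| stats sum(bytes_in) AS bytes_in sum(bytes_out) AS bytes_out
        values(app) AS app values(rule) AS rule
  BY _time src dest_ip user
| join type=inner dest_ip
    [ search index=edr RPort=21
      | rename RemoteIP AS dest_ip
      | fields dest_ip username processname ]
| table _time src dest_ip username processname
```

The rename inside the subsearch is what bridges the differing field names; a stats-based correlation over both indexes is a common alternative when join's result limits become a problem.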
I'm working on a search that evaluates events for a specific index/sourcetype combination; the events reflect SSO information regarding user authentication success as well as applications the user has accessed while logged on. The search is the result of an ask to identify how many users have accessed 10 or fewer apps during their logon session. For the user, I'm using a field called "sm_user_dn"; for the app name, I'm using "sm_agentname". My search currently looks like this:

index=foo sourcetype=bar | table sm_user_dn, sm_agentname

This is pretty basic, and shows me all the user name/app combinations that have been reported in the events. At this point, how do I tally up the number of apps per user and show only the users which have nine or fewer apps associated with them?
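A possible next step (a sketch using the fields from the question): count distinct apps per user, then filter on the threshold. The sketch uses <= 9 per the last sentence; adjust to <= 10 if "10 or fewer" is the real ask.

```
index=foo sourcetype=bar
| stats dc(sm_agentname) AS app_count BY sm_user_dn
| where app_count <= 9
```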
A customer reports intermittent connectivity issues to the internet, a website, what have you. Our instance of Splunk captures logs from our firewalls and other network devices. What are some search strings I would use, or how would I start using Splunk to troubleshoot historical (not live) connection issues going out to a website? I know this is a broad question, but I'm just looking for some ideas on where to start. Thank you.
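One hedged starting point (a sketch; the index, field names, and destination are placeholders that must be adapted to the local firewall add-on):

```
index=firewall (dest=www.example.com OR dest_ip=203.0.113.10) earliest=-7d
| timechart span=15m count BY action
```

Gaps in allowed traffic, or spikes in blocked/denied counts around the reported times, are often the first clue in historical troubleshooting.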
Hello, I'll try to explain the issue we had. We have 7 HFs and 4 indexers:

HF_1, HF_2, HF_3 send TCP logs and log files to HF_4, HF_6, and HF_7.
HF_4 sends TCP logs (not necessarily the same data) to HF_5.
HF_5 sends the data from HF_4 to our indexers.

The splunkd service on HF_5 was down, which caused our HF_4 to receive "TCPOutAutoLB-0, forwarding destinations have failed" errors; that makes sense. What I don't understand is why HF_1/2/3 got stuck and stopped sending data to HF_6 and HF_7 as well. Please help me understand this. Thank you all! Hen
Hello, guys. I am struggling with my search in Splunk and would appreciate any help. Currently I have a search that outputs the number of results for the last hour and the hour before that:

index="xxx" sourcetype="xxx" environment="stage" earliest=-2h@h latest=-0h@h
| bin _time span=1h
| stats count as mycount by _time

Now I would like to compare those two hours and create an alert only if the number of results from the last hour is 100x smaller than from the hour before that. Is that possible? How could I go about such a conditional?
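One possible conditional (a sketch built on the search from the question): put the previous hour's count next to the current hour's with streamstats, then keep the row only when the drop exceeds 100x; the alert can then trigger on "number of results > 0".

```
index="xxx" sourcetype="xxx" environment="stage" earliest=-2h@h latest=-0h@h
| bin _time span=1h
| stats count AS mycount BY _time
| sort _time
| streamstats window=1 current=f last(mycount) AS prev_count
| where isnotnull(prev_count) AND mycount * 100 < prev_count
```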
We are using Splunk Enterprise on-prem 8.2.3.3 with add-on version 4.1.0. Our client configured the permissions according to the documentation, but the following error keeps being raised:

2022-09-14 10:28:53,294 level=ERROR pid=20311 tid=MainThread logger=splunk_ta_o365.modinputs.graph_api pos=utils.py:wrapper:72 | datainput=b'O365ServicesUserCounts' start_time=1663169316 | message="Data input was interrupted by an unhandled exception."
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunksdc/utils.py", line 70, in wrapper
    return func(*args, **kwargs)
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/graph_api/__init__.py", line 109, in run
    return consumer.run()
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/graph_api/GraphApiConsumer.py", line 62, in run
    items = [endpoint.get('message_factory')(item) for item in reports.throttled_get(self._session)]
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/common/portal.py", line 573, in throttled_get
    return self.get(session)
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/common/portal.py", line 599, in get
    raise O365PortalError(response)
splunk_ta_o365.common.portal.O365PortalError: 403:{"error":{"code":"UnknownError","message":"{\"error\":{\"code\":\"S2SUnauthorized\",\"message\":\"Invalid permission.\"}}","innerError":{"date":"2022-09-14T15:28:38","request-id":"72bea9b3-7f84-4063-9be4-05a4d331f39d","client-request-id":"72bea9b3-7f84-4063-9be4-05a4d331f39d"}}}

Also:

2022-09-14 10:28:53,294 level=ERROR pid=20311 tid=MainThread logger=splunk_ta_o365.modinputs.graph_api.GraphApiConsumer pos=GraphApiConsumer.py:run:74 | datainput=b'O365ServicesUserCounts' start_time=1663169316 | message="Error retrieving Graph API Messages."
exception=403:{"error":{"code":"UnknownError","message":"{\"error\":{\"code\":\"S2SUnauthorized\",\"message\":\"Invalid permission.\"}}","innerError":{"date":"2022-09-14T15:28:38","request-id":"72bea9b3-7f84-4063-9be4-05a4d331f39d","client-request-id":"72bea9b3-7f84-4063-9be4-05a4d331f39d"}}}

Does anyone know what the missing or misconfigured permission is?
Hello Splunkers!

XBY-123-UTB
SVV-123-TBU

I want to trim the value according to a condition: for XBY-123-UTB I want to trim to XBY (only 3 characters); for SVV-123-TBU I want to trim the string to 7 characters.

What I have tried (the column name is Employee_number):

If(LIKE(Employee_number,"%SVV%"),substr(Employee_number,1,7), LIKE(ubstr(Employee_number,1,3)))

But this is not working for me. Please help me with this and suggest other approaches as well.
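A possible eval (a sketch, assuming the intent is: keep 7 characters when the value contains "SVV", otherwise keep the first 3):

```
| eval trimmed = if(like(Employee_number, "%SVV%"),
      substr(Employee_number, 1, 7),
      substr(Employee_number, 1, 3))
```

Note that both branches of if() must return a value, so the second argument to the else branch is a substr() call, not another like() test; for more than two patterns, case() scales better than nested if().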
Hi all, I have a requirement where I want to set up an alert to run every 10 minutes on Friday between 8-10pm, and every 10 minutes on Sunday between 6-8am. I tried writing the cron for it, however it didn't work. Can you please help?
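A single cron expression cannot bind different hour ranges to different days, so one common workaround (a sketch) is two scheduled copies of the same alert, one per window:

```
*/10 20-21 * * 5    (Friday, runs 20:00 through 21:50)
*/10 6-7 * * 0      (Sunday, runs 06:00 through 07:50)
```

If a run exactly at 22:00 or 08:00 is also wanted, that boundary minute needs its own schedule entry, since the hour ranges above stop at :50 of the last listed hour.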
Hi all, I installed the Akamai SIEM Integration app on the Deployer for the SHC successfully, installed JRE 1.8 successfully, and configured the "Akamai SIEM API" data input for the Akamai Control dashboard successfully. However, the Akamai Logging dashboard shows the following error:

ERROR ExecProcessor - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" javax.xml.stream.XMLStreamException: No element was found to write: java.lang.ArrayIndexOutOfBoundsException: -1

Does anyone have any clues? Is this a pathing issue? Mike/deepdiver
Hello everyone, I have the following question. For use cases (anything in Enterprise Security > Content), let's say I have 5 sourcetypes. If I create a new correlation search that I want to work for these 5 sourcetypes, I would have the following:

index=something sourcetype=something1 OR sourcetype=something2 OR sourcetype=something3 OR sourcetype=something4 OR sourcetype=something5

That would mean that whenever a new sourcetype is onboarded I would have to manually add it to all the correlation searches that I created, or that are there by default in the Splunk Enterprise Security content. How do the other correlation searches (the ones that come by default with ES) work with other sourcetypes if the sourcetypes weren't specified in the query?
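One way to avoid editing every search (a sketch; the macro name is illustrative) is to keep the sourcetype list in a single search macro and reference it from each correlation search, so onboarding a new sourcetype means editing one definition. Many of the searches that ship with ES sidestep the problem differently: they run tstats against CIM data models, so any sourcetype that is CIM-mapped into the data model is picked up without being named in the query.

```
# macros.conf (illustrative name)
[my_usecase_sourcetypes]
definition = index=something (sourcetype=something1 OR sourcetype=something2 OR sourcetype=something3 OR sourcetype=something4 OR sourcetype=something5)

# in each correlation search:
`my_usecase_sourcetypes` | ...
```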
Hello, logs are being collected through Cisco eStreamer. I want to convert the hex of the packet field to ASCII. If you know how to convert the packet field of Cisco eStreamer to ASCII, please share it. Thank you.
I have a cluster of indexers i1, i2, and i3, and I am not seeing any data coming from universal forwarder f1 into the custom index "network". I can see index=_internal host="f1" on search head sh, but nothing in the network index. I am filling up the file random.log on f1:

[ec2-user@f1 log]$ sudo /opt/splunkforwarder/bin/splunk btool inputs list monitor:///var/log/*.log
[monitor:///var/log/*.log]
_rcvbuf = 1572864
disabled = 0
host = $decideOnStartup
index = network

[ec2-user@f1 log]$ cat /var/log/random.log
Success 655
Error 78

The forwarder seems connected to the indexers:

[ec2-user@f1 log]$ sudo tail -f /opt/splunkforwarder/var/log/splunk/splunkd.log
09-14-2022 12:59:15.389 +0000 INFO AutoLoadBalancedConnectionStrategy [2938 TcpOutEloop] - Connected to idx=10.0.7.4:9997, pset=0, reuse=0. using ACK.
09-14-2022 12:59:45.300 +0000 INFO AutoLoadBalancedConnectionStrategy [2938 TcpOutEloop] - Connected to idx=10.0.7.2:9997, pset=0, reuse=0. using ACK.
^C
[ec2-user@f1 log]$ sudo /opt/splunkforwarder/bin/splunk list forward-server
Active forwards:
10.0.7.2:9997
10.0.7.4:9997
Configured but inactive forwards:
10.0.7.3:9997

This is how it looks on one of the indexers:

[ec2-user@i1 ~]$ sudo /opt/splunk/bin/splunk list index | grep network
network /opt/splunk/etc/network/db /opt/splunk/etc/network/colddb /opt/splunk/etc/network/thaweddb
[ec2-user@i1 ~]$ sudo ls -l /opt/splunk/etc/network/db
total 4
-rw------- 1 splunk splunk 10 Sep 14 11:45 CreationTime
drwx--x--- 2 splunk splunk 6 Sep 14 11:45 GlobalMetaData
Hello, I am trying to list the fields I have selected into a single field to display in a dashboard. Currently trying:

| eval Details = mvappend('src', 'dest')

but this only lists the values. What I am trying to achieve is listing the field name and value, for example:

src=192.168.0.1 dest=192.168.0.2

etc. Any help appreciated, thanks.
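One possible approach (a sketch using the field names from the question): build the name=value strings explicitly, either with a direct eval for two fields, or with foreach so the pattern scales to a longer field list without repetition.

```
| eval Details = mvappend("src=" . src, "dest=" . dest)

| foreach src dest
    [ eval Details = mvappend(Details, "<<FIELD>>=" . '<<FIELD>>') ]
```

In the foreach variant, <<FIELD>> is substituted with each listed field name, so each multivalue entry carries both the name and the value.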
Greetings! The target field is message_id, and sometimes the field value comes with brackets, <b8047a671f47430cb44afbf14d332c63@domain.com>, and sometimes it doesn't: b8047a671f47430cb44afbf14d332c63@domain.com. I'm trying to use rex mode=sed to replace < and > with nothing (effectively removing the brackets) so that the field can later be used in a deduplication process (outside Splunk), but I can't get it to work.

I tried using

rex field=message_id mode=sed "s/<>//g"

but no substitution occurs, while

rex field=message_id mode=sed "y/<>//g"

throws an error: "Error in 'rex' command: Failed to initialize sed. '<>' and '' are different length."

What gives?
Hello Team, is it possible to create an error report that runs every 30 minutes, but where the mail notification is raised only if 20 or more ERROR events were generated in the last 30 minutes? Example:

Index=ABC sourcetype=XYZ "ERROR"=999

I need help creating a report like this.
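One possible shape for this (a sketch; the base search is taken from the question, and the threshold logic is an assumption about the intent): schedule the search with cron */30 * * * * over the last 30 minutes, and set the alert to trigger when the number of results is greater than zero.

```
Index=ABC sourcetype=XYZ "ERROR"=999
| stats count AS error_count
| where error_count >= 20
```

The where clause means the search returns a row only when the threshold is met, so the mail action fires only in that case.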
Is there a way in AppDynamics to know the average app start-up time? I am able to figure out from the sessions created that it captures the splashscreenActivity time, which is the app start-up time. Is there any way I can get the average start-up time? If we can segregate warm-start and cold-start times, that would be very good. Thanks,