Part 1 | Getting Started with AIOps: Event Correlation Basics and Alert Storm Detection in Splunk IT Service Intelligence

WATCH NOW

You'll learn how to leverage the Content Pack for Monitoring and Alerting with ITSI to quickly create and group notable events from ITSI services and third-party monitoring tools, and answer questions like:

- Is the volume of incoming alerts higher, lower, or the same as what I typically see?
- Which hosts, checks, KPIs, and Services are contributing to the highest volumes of alerts and episodes?
- During an alert storm, what types of alerts are major contributors to the sudden increase in alert volume?

Want to learn more? Watch Part 2 | Diving Deeper With AIOps: Getting the Most Out of Event Correlation and Alert Storm Detection in Splunk IT Service Intelligence.

Dear Community,

Do you have any opinions on, or firsthand experience with, Splunk's new Data Manager feature? What are its advantages and disadvantages compared to an add-on (e.g., the Splunk Add-on for AWS) for ingesting your data into Splunk?

As I understand it, Data Manager just generates CloudFormation templates; we have to stand up our own machines from them (paying AWS for those resources) and push data to Splunk (are we paying for SVC usage, i.e., ingestion, here?). The add-on, by contrast, is fully managed by Splunk and pulls the data from AWS. The add-on of course uses Splunk resources (are we paying for SVC usage, i.e., ingestion, here?) to pull data until it reaches AWS API limits (if it reaches them).

So I would be happy to hear your own objective experiences on the topic. Is it worth starting to use Data Manager overall? (AWS costs + Splunk ingestion costs (?)) vs. (Splunk ingestion costs (?) + API limits). What else did you consider?

(Of course I tried googling this topic, but I did not find any good, objective comparison or opinions about it.) Thank you very much for your help!

Hello Splunk ES experts,

My splunkd is crashing frequently with the error below in the crash logs:

C++ exception: exception_addr=0x7ff2c2c3c620 typeinfo=0x556c38241c48, name=St9bad_alloc
Exception indicates memory allocation failure

This started after the ES app installation. I have checked with the free command and I have plenty of memory on the system (32 GB and 16 CPUs):

              total        used        free      shared  buff/cache   available
Mem:       32750180     1764936    19979208       17744    11006036    31040488
Swap:       4169724           0     4169724

Below are my ulimit settings:

splunk soft nofile 65535
splunk hard nofile 65535
splunk soft nproc 65535
splunk hard nproc 65535

Any suggestions, please?

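(For anyone investigating something similar, a hedged sketch: assuming introspection data is being collected, splunkd's own memory footprint over time can be charted from the _introspection index, which can show whether memory climbs steadily before each crash. The field names below follow the standard resource-usage schema and may vary by version.)

index=_introspection sourcetype=splunk_resource_usage component=PerProcess data.process=splunkd
| timechart span=5m max(data.mem_used) AS splunkd_mem_used
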
So I'm trying to get all events where the val1+val2 pair also appears in another event from the table. In the example below, I would need rows 0 and 1 as output, because both val1 and val2 match. Rows 3 and 4 match on val1 but not on val2, and rows 1 and 2 match on val2 but not on val1, so those events should be excluded. (I also need the time column to stay, as I need to do some other operations with it.)

row#  time        val1  val2
0     YYYY-MM-DD  A     X
1     YYYY-MM-DD  A     X
2     YYYY-MM-DD  B     X
3     YYYY-MM-DD  C     Y
4     YYYY-MM-DD  C     X
5     YYYY-MM-DD  A     Z

To solve this I've been trying:

| foreach val1 [eval test=if(val1+val2=val1+val2, "same", "not")]

or

'<<FIELD>>' = '<<FIELD>>'

But I end up getting either "not" in all cases, or "same" in others even though both values are not actually the same.

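A hedged sketch of one way to approach this (the base search index=foo is a placeholder; eventstats counts how many events share each exact val1/val2 pair without collapsing the events, so _time survives for later operations):

index=foo
| eventstats count AS pair_count BY val1 val2
| where pair_count > 1
| table _time val1 val2
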
I am trying to configure the Splunk Add-on for Microsoft Azure (version 4.0.2 on a standalone heavy forwarder running Splunk 9.0.1, OS RHEL 7) and I'm seeing the error below in /opt/splunk/var/log/splunk/ta_ms_aad_MS_AAD_audit.log:

2022-09-14 11:41:41,871 ERROR pid=12784 tid=MainThread file=base_modinput.py:log_error:316 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-MS-AAD/lib/splunktaucclib/modinput_wrapper/base_modinput.py", line 140, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/MS_AAD_audit.py", line 168, in collect_events
    response = azutils.get_items_batch_session(helper=helper, url=url, session=session)
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_azure_utils/utils.py", line 119, in get_items_batch_session
    raise e
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_azure_utils/utils.py", line 115, in get_items_batch_session
    r.raise_for_status()
  File "/opt/splunk/etc/apps/TA-MS-AAD/lib/requests/models.py", line 1021, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://graph.microsoft.com/None/auditLogs/directoryAudits?$orderby=activityDateTime&$filter=activityDateTime+gt+2021-10-01T14:26:12.017133Z+and+activityDateTime+le+2022-09-14T16:34:41.623739Z

On the Azure (Government) side we have the permissions below enabled:

AuditLog.Read.All
Device.Read.All
Directory.Read.All
Group.Read.All
GroupMember.ReadWrite.All
IdentityRiskEvent.Read.All
Policy.Read.All
Policy.Read.ConditionalAccess
Policy.ReadWrite.ConditionalAccess
SecurityEvents.Read.All
User.Read
User.Read.All

Also, we have a P2 license, so that should not be the issue. We have a Python script that is able to retrieve sign-ins from Azure using the same credentials we are using for the Splunk Add-on for Microsoft Azure.

Another thing I noticed is that the URL in the error message seems wrong (note the "None" where the API version should be). It seems like it should be:

https://graph.microsoft.com/v1.0/auditLogs/directoryAudits?$orderby=activityDateTime&$filter=activityDateTime+gt+2021-10-01T14:26:12.017133Z+and+activityDateTime+le+2022-09-14T16:34:41.623739Z

A couple of other tidbits: the app works for our commercial tenant. Our government tenant is new and at this point doesn't have any subscriptions. Does anyone know if having more than zero subscriptions is a requirement for this app?

I currently have a lookup that contains two columns: Hostnames and Location. I can use the following search to look for "squirrel" across all hostnames in this lookup:

"squirrel" [| inputlookup mylookup.csv | fields MY_Hostname | rename MY_Hostname as host]

What I would like to do is set up an alert where, for each hostname in MY_Hostname, Splunk will look for "squirrel". If the number of results found is equal to 0 (meaning that the squirrel log was not created) in a 24-hour period, I would like an email sent out with that hostname in it.

I know I can set it up with all hostnames from the lookup, but the issue I see is that if hostname_1 has "squirrel" and hostname_4 does not, the total will still be greater than 0. I effectively want to know if an application is not running and which host it is not running on. The application will generate "squirrel" at least once in a 24-hour period. (If you don't like squirrels, you can insert your animal of choice here.)

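A hedged sketch of one approach, assuming the lookup and field names above: the append subsearch seeds every host from the lookup with a zero count, so hosts that produced no "squirrel" events in the window still surface.

"squirrel" [| inputlookup mylookup.csv | fields MY_Hostname | rename MY_Hostname as host]
| stats count BY host
| append [| inputlookup mylookup.csv | fields MY_Hostname | rename MY_Hostname as host | eval count=0]
| stats max(count) AS count BY host
| where count=0

Scheduled over a 24-hour window with a trigger condition of "number of results > 0", each result row is a host that never logged "squirrel".
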
I currently have a lookup that contains two columns: Hostnames and Location. I can use the following search to look for "squirrel" across all hostnames in this lookup:

"squirrel" [| inputlookup mylookup.csv | fields MY_Hostname | rename MY_Hostname as host]

Works like a dream. I've also set up an alert for this to trigger once. The issue I have is that the alert email is consolidated with all the different matches. For example, "squirrel" is found a few times on hostname_1, twice on hostname_3, and 17 times on hostname_8. The email that is sent contains all the "squirrel" logs for all the hosts.

What I would like to do is separate out each alert by individual hostname. So, in our example, I should receive 3 email alerts: one for hostname_1 with a few records, one for hostname_3 with two records, and one for hostname_8 with 17 records. Is there a way to perform a sort of for loop over the lookup so that I can simply update it instead of having to manage a bunch of alerts?

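A hedged sketch, assuming the same search as above: aggregate per host, then switch the alert's trigger from "once" to "for each result", which makes Splunk fire one alert action (one email) per result row. Throttling on the host field suppresses repeats within the throttle window, and $result.host$ can be referenced in the email subject or body.

"squirrel" [| inputlookup mylookup.csv | fields MY_Hostname | rename MY_Hostname as host]
| stats count BY host
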
Hello,

I'm working on creating automated alerts from an email security vendor and would like for them to only include the names of files/attachments which have the "attached" disposition within a nested JSON structure. The example below shows what I'm talking about in a limited/trimmed capacity:

messageParts: [
  {
    contentType: image/png
    disposition: attached
    filename: example.png
    md5: xxyy
    sha256: xxyy
  }
  {
    contentType: text/html
    disposition: inline
    filename: text.html
    md5: xxyy
    sha256: xxyy
  }
  {
    contentType: text/plain
    disposition: inline
    filename: text.txt
    md5: xxyy
    sha256: xxyy
  }
]

Essentially, I'd like to pull and store the respective "filename" and hash values for when the "disposition" field is "attached" but not "inline". I know this can likely be done using something like spath or mvfind, but I'm not entirely sure how to accomplish it and it's giving me fits.

Anyone who can lend a helping hand would be handsomely rewarded with karma and many well wishes. Thanks for taking the time to consider my question!

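A hedged sketch using spath and mvexpand (assuming the structure above lives in the raw event; mvexpand splits the messageParts array into one result per part, so each part's disposition can be filtered independently):

| spath path=messageParts{} output=part
| mvexpand part
| spath input=part
| where disposition="attached"
| table filename md5 sha256
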
We installed SplunkVersionControl on-prem and SplunkVersionControlCloud in Splunk Cloud. Backup is successful, but the restore will not work: the original savedsearch "SplunkVersionControl Audit Query" never returns an entry. Trying to tweak the query to get at least a result ends in timestamps that differ from the lookup entry.

Log: unable to find a time entry of time=xx matching the auditEntries list of [{list of timestamp(s) different from xx, user for auditEntries}]

I really like the idea of storing my knowledge objects in git, so getting the restore working is crucial for us.

BR Mike

I have looked at the join documentation, but I am getting a little lost in translation. What I am trying to accomplish is to pull data from Index1, then join on IP address to another index to pull different information and display all of it. Example:

Index=firewall dest_port=21
| stats values(dest_ip) as dest_ip values(dest_port) as dest_port sum(bytes_in) as bytes_in sum(bytes_out) as bytes_out values(app) as app values(rule) as rule by user src _time

Index=edr RPort=21 RemoteIP=$dest_ip-from-first-search

The output should be a table with the following: firewall._time, firewall.src, firewall.dest_ip, edr.username, edr.processname. The issue I am running into is that the IP address field is named differently between the 2 indices. Ideally I would join firewall.dest_ip to edr.RemoteIP. Any help would be appreciated.

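A hedged sketch (index and field names as given above; renaming dest_ip to RemoteIP before the join gives both sides a common key):

index=firewall dest_port=21
| rename dest_ip AS RemoteIP
| join type=inner RemoteIP
    [ search index=edr RPort=21
    | fields RemoteIP username processname ]
| table _time src RemoteIP username processname

As a design note, join subsearches are capped in rows and runtime, so on large datasets a single search over both indices with a coalesced shared field and stats by that field is often more robust.
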
WATCH NOW

Join us to learn how the admin console can save you time and give you more control over the Splunk® Cloud experience. Whether you want to monitor the status and availability of your Splunk deployments, manage Splunk Cloud upgrades and maintenance, or tailor Splunk communication to your preferences, you can do it all with the admin console. Replay the Platform Edition Tech Talk on demand.

I'm working on a search that evaluates events for a specific index/sourcetype combination; the events reflect SSO information regarding user authentication success as well as applications the user has accessed while logged on. The search is the result of an ask to identify how many users have accessed 10 or fewer apps during their logon session. For the user, I'm using a field called "sm_user_dn"; for the app name, I'm using "sm_agentname". My search looks like this currently:

index=foo sourcetype=bar
| table sm_user_dn, sm_agentname

This is pretty basic, and shows me all the user name/app combinations that have been reported in the events. At this point, how do I tally up the number of apps per user and only show the users which have nine or fewer apps associated with them?

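A hedged sketch (assuming each event carries one user/app pair; dc() counts the distinct apps seen per user, and the where clause keeps users at or under the threshold):

index=foo sourcetype=bar
| stats dc(sm_agentname) AS app_count BY sm_user_dn
| where app_count <= 9
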
A customer reports intermittent connectivity issues to the internet, a website, what have you. Our instance of Splunk captures logs from our firewalls and other network devices. What are some search strings I would use, or how would I start using Splunk to troubleshoot historical (not live) connection issues going out to a website? I know this is a broad question, but I'm just looking for some ideas on where to start. Thank you.

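One possible starting point, as a hedged sketch; the index name and the dest_ip/action fields are assumptions (substitute whatever your firewall add-on actually extracts), and the idea is simply to chart allowed versus blocked traffic to the destination over the reported window:

index=firewall dest_ip="203.0.113.10" earliest=-7d
| timechart span=15m count BY action

Gaps in allowed traffic, or bursts of denies/drops, that line up with the customer's reported times are usually the first thread to pull.
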
Hello, I'll try to explain the issue we had. We have 7 HFs and 4 indexers.

HF_1, HF_2, HF_3 send TCP logs and log files to: HF_4, HF_6, and HF_7.
HF_4 sends TCP logs (not necessarily the same data) to HF_5.
HF_5 sends the data from HF_4 to our indexers.

The splunkd service on HF_5 was down, which caused our HF_4 to receive the errors "TCPOutAutoLB-0, forwarding destinations have failed"; that makes sense. What I don't understand is why the servers HF_1/2/3 got stuck and stopped sending data to HF_6 and HF_7 as well.

Please help me understand this. Thank you all!

Hen

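For anyone tracing a similar cascade, a hedged sketch for spotting blocked output queues in the forwarders' own metrics (this uses the standard metrics.log queue fields and assumes the HFs forward their _internal logs somewhere searchable):

index=_internal source=*metrics.log* group=queue blocked=true
| timechart span=1m count BY host

A rising count for a given host shows when its queues backed up, which helps establish the order in which the blockage propagated upstream.
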
Hello, guys. I am struggling with my search in Splunk and would appreciate any help. Currently I have a search that outputs the number of results for the last hour and the hour before that:

index="xxx" sourcetype="xxx" environment="stage" earliest=-2h@h latest=-0h@h
| bin _time span=1h
| stats count as mycount by _time

Now I would like to compare those two hours and create an alert only if the number of results from the last hour is 100x smaller than from the hour before that. Is that possible? How could I go about such a conditional?

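A hedged sketch building on the search above: streamstats with current=f carries the previous hour's count onto the later row, and the where clause keeps a row only when the latest hour is at least 100x smaller, so the alert can simply trigger on "number of results > 0".

index="xxx" sourcetype="xxx" environment="stage" earliest=-2h@h latest=-0h@h
| bin _time span=1h
| stats count as mycount by _time
| sort 0 _time
| streamstats current=f last(mycount) AS prev_count
| where isnotnull(prev_count) AND mycount * 100 < prev_count
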
We are using Splunk Enterprise on-prem 8.2.3.3 with the add-on version 4.1.0. Our client configured the permissions according to the documentation, but the following error keeps appearing:

2022-09-14 10:28:53,294 level=ERROR pid=20311 tid=MainThread logger=splunk_ta_o365.modinputs.graph_api pos=utils.py:wrapper:72 | datainput=b'O365ServicesUserCounts' start_time=1663169316 | message="Data input was interrupted by an unhandled exception."
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunksdc/utils.py", line 70, in wrapper
    return func(*args, **kwargs)
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/graph_api/__init__.py", line 109, in run
    return consumer.run()
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/graph_api/GraphApiConsumer.py", line 62, in run
    items = [endpoint.get('message_factory')(item) for item in reports.throttled_get(self._session)]
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/common/portal.py", line 573, in throttled_get
    return self.get(session)
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/common/portal.py", line 599, in get
    raise O365PortalError(response)
splunk_ta_o365.common.portal.O365PortalError: 403:{"error":{"code":"UnknownError","message":"{\"error\":{\"code\":\"S2SUnauthorized\",\"message\":\"Invalid permission.\"}}","innerError":{"date":"2022-09-14T15:28:38","request-id":"72bea9b3-7f84-4063-9be4-05a4d331f39d","client-request-id":"72bea9b3-7f84-4063-9be4-05a4d331f39d"}}}

Also:

2022-09-14 10:28:53,294 level=ERROR pid=20311 tid=MainThread logger=splunk_ta_o365.modinputs.graph_api.GraphApiConsumer pos=GraphApiConsumer.py:run:74 | datainput=b'O365ServicesUserCounts' start_time=1663169316 | message="Error retrieving Graph API Messages." exception=403:{"error":{"code":"UnknownError","message":"{\"error\":{\"code\":\"S2SUnauthorized\",\"message\":\"Invalid permission.\"}}","innerError":{"date":"2022-09-14T15:28:38","request-id":"72bea9b3-7f84-4063-9be4-05a4d331f39d","client-request-id":"72bea9b3-7f84-4063-9be4-05a4d331f39d"}}}

Does anyone know which permission is missing or misconfigured?

Hello Splunkers!

XBY-123-UTB
SVV-123-TBU

I want to trim the value according to a condition:
- for XBY-123-UTB I want to trim to XBY, only 3 characters
- for SVV-123-TBU I want to trim the string to 7 characters

What I have tried (the column name is Employee_number):

If(LIKE(Employee_number,"%SVV%"),substr(Employee_number,1,7), LIKE(ubstr(Employee_number,1,3))

But this is not working for me. Please help me with this and provide other approaches as well.

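A hedged sketch of a corrected version: the third argument of if() should be another substr(), not like(), the parentheses need to balance, and the eval functions are lowercase if/like/substr.

| eval trimmed=if(like(Employee_number, "%SVV%"), substr(Employee_number, 1, 7), substr(Employee_number, 1, 3))

With the sample values, SVV-123-TBU becomes SVV-123 (7 characters) and XBY-123-UTB becomes XBY (3 characters).
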
Hi All,

I have a requirement where I want to set up an alert to run every 10 minutes on Friday between 8-10pm and every 10 minutes on Sunday between 6-8am. I tried writing the cron for it; however, it didn't work. Can you please help?

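A hedged sketch: a single cron expression cannot tie different hour ranges to different weekdays, so one common workaround is two copies of the alert, each with its own schedule (hours are the scheduler's local time; day-of-week 5 is Friday and 0 is Sunday):

*/10 20-21 * * 5
*/10 6-7 * * 0

Note that each range's last run lands at :50 of the final hour (21:50 and 07:50); if a run exactly at 10pm or 8am is also needed, that takes an additional schedule entry such as 0 22 * * 5.
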
Hi all,

Installed the Akamai SIEM Integration app on the Deployer for the SHC successfully. Installed JRE 1.8 successfully. Configured the "Akamai SIEM API" data input for the Akamai Control dashboard successfully. However, the Akamai Logging Dashboard shows the following error:

ERROR ExecProcessor - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" javax.xml.stream.XMLStreamException: No element was found to write: java.lang.ArrayIndexOutOfBoundsException: -1

Anyone have any clues? Is this a pathing issue?

Mike/deepdiver