All Posts
Hi @Nour.Alghamdi, Can you share what documentation you were looking at? I'll share it with the Docs team to see if we can get changes made to it.  In the meantime, I'm also looking around to see what I can find. 
Hi, I am having issues passing a value into a saved search. Below is a simplified version of my query:

| inputlookup alert_thresholds.csv
| search Alert="HTTP 500"
| stats values(Critical) as Critical
| appendcols [| savedsearch "Events_list" perc=Critical]

Basically, what I want to do is use the Critical value as the value of perc in the subsearch, but it does not seem to work correctly: I get no results. When I replace Critical with 10 in the subsearch, it works just fine.
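One likely explanation: savedsearch arguments are substituted when the search is parsed, so a bare field name like Critical is passed as the literal string, not as the field's value from the outer pipeline. A common workaround (a sketch, assuming Events_list accepts perc as an argument) is the map command, which substitutes $field$ tokens per input result:

```
| inputlookup alert_thresholds.csv
| search Alert="HTTP 500"
| stats values(Critical) as Critical
| map search="| savedsearch Events_list perc=$Critical$"
```

Note that map runs one search per input row, so keep the outer result set small (its maxsearches limit defaults to 10).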
Hi @Yogesh.Joshi, Did you ever hear from anyone from AppD when you had those questions a few months ago? 
We have log ingestion from an AWS cloud environment into Splunk via HTTP Event Collector (HEC). One of the users is reporting that some logs are missing in Splunk. Is there a log file to validate this? And if there is a connectivity drop between the cloud apps and HEC, how can we validate that?
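As a starting point (a hedged sketch based on standard Splunk internal logging; component and group names may vary by version), HEC request handling and errors are recorded in the _internal index on the receiving instance:

```
index=_internal sourcetype=splunkd component=HttpInputDataHandler log_level=ERROR
```

HEC throughput metrics also appear in metrics.log, which can help spot gaps or dropped request volume over time:

```
index=_internal source=*metrics.log group=http_event_collector
| timechart span=5m sum(num_of_events) as events_received
```

Comparing events_received against what the AWS side believes it sent is one way to narrow down whether the loss is at the sender, in transit, or at indexing.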
Hi @Harikiran.Kanuru, You can find info here on email template and actions: https://docs.appdynamics.com/appd/onprem/latest/en/appdynamics-essentials/alert-and-respond/actions/notification-actions
Hello @Ankur.Sharma @Evgeniy.Ziangirov, The latest version of the iOS agent, which came out in early December, now supports Alamofire. Sorry for getting back to everyone so late. https://docs.appdynamics.com/appd/23.x/23.12/en/product-and-release-announcements/release-notes#id-.ReleaseNotesv23.12-agent-enhancements-23-12AgentEnhancements
Events are merging like this: 2022-02-02T15:26:46.593150-05:00 mycompany: syslog initialised2022-02-02T15:26:48.970328-05:00 mycompany: [Portal|SYSTEM|20001|*system] Portal is starting2022-02-02T15:26:50.032387-05:00 mycompany: [Portal|SYSTEM|20002|*system] Portal is up and running2022-02-02T15:26:50.488943-05:00 mycompany: [Portal|CONTENTMANAGER|20942|-] Created fields (category), uid=5fdc6ec-01f0-41d5-8a33-d58b5efre2022-02-02T15:26:50.496126-05:00 mycompany: [Portal|CONTENTMANAGER|20942|-] Created fields (category), uid=6fe48c-20ee-4f7b-bf88-22ed5dfdd2022-02-02T15:26:50.502563-05:00 mycompany: [Portal|CONTENTMANAGER|20942|-] Created fields (category), uid=bcd5c461-9d23-4c79-8509-4af76c03ff5a2022-02-02T15:26:50.505764-05:00 mycompany: [Portal|CONTENTMANAGER|20942|-] Created fields (category), uid=bbb9449e-2893-4d06-bc51-edfdd42022-02-02T15:26:50.512171-05:00 mycompany: [Portal|CONTENTMANAGER|20942|-] Created fields (category), uid=155c7a37-69bc-44d2-98ac-cb75831a7c472022-02-02T15:26:50.517049-05:00 mycompany: [Portal|CONTENTMANAGER|20942|-] Created fields (category), uid=a575dfde3eb-4ca6-be2d-4491a4b59fe02022-02-02T15:33:33.669982-05:00 mycompany: syslog initialised2022-02-02T15:33:40.935228-05:00 mycompany: [Portal|SYSTEM|20001|*system] Portal is starting2022-02-02T15:33:41.990171-05:00 mycompany: [Portal|SYSTEM|20002|*system] Portal is up and running2022-02-02T15:35:34.533063-05:00 mycompany: syslog initialised2022-02-02T15:35:42.168799-05:00 mycompany: [Portal|SYSTEM|20001

I am expecting the logs to break on timestamps, like this:

2022-02-02T15:26:46.593150-05:00 mycompany: syslog initialised
2022-02-02T15:26:48.970328-05:00 mycompany: [Portal|SYSTEM|20001|*system] Portal is starting
2022-02-02T15:26:50.032387-05:00 mycompany: [Portal|SYSTEM|20002|*system] Portal is up and running
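Since the merged data has no newlines between events, a common fix (a sketch; the stanza name is a placeholder for your actual sourcetype, and this must go in props.conf on the parsing tier) is a zero-width LINE_BREAKER that splits immediately before each ISO timestamp via a lookahead:

```
[your:sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ()(?=\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d+[-+]\d{2}:\d{2})
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 35
```

The empty capture group means nothing is consumed at the break point, so each event keeps its full timestamp. This only applies to data indexed after the change; already-indexed events stay merged.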
I didn't quite understand what you mean. Are you saying that, for example, the input playbook will return the OU of the domain computer, and then in the automation playbooks I filter based on that value? If so, doesn't that kind of defeat the purpose of input playbooks?
Why not have the input playbook act as a filter, so anything that matches your requirement comes out as one output and, if you want, everything else comes out under another output? Then you can work out which was True and which was False. Or tag/update the artifact that contains the value with something to indicate the result of the check. There are many ways to do things in SOAR; it just depends how janky you want to get!
Ok, two things. Three actually.

1. Check out the Splunk Education site for entry-level courses on Splunk searching.
2. If you want to search only for logon type 2, add that condition to the initial search. (Searching for a particular value is much more effective than excluding a value from your search, so if "not logon type 3" can be simplified to "logon type 2", that's great.)
3. I honestly have no idea what you mean by "add our asset to the search" (in Splunk terminology those are called searches, not queries).
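Point 2 might look like this (a sketch only; Logon_Type is the field name used by the standard Windows add-on, and the host value is a placeholder for the asset in question):

```
source=WinEventLog:Security EventCode=4624 Logon_Type=2 host="YOUR-ASSET"
```

Putting both conditions in the base search lets the indexers discard non-matching events early, instead of filtering them out later in the pipeline.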
For the time being I have solved the issue by saving the code one piece at a time. Saving the 200 lines of code in one shot was generating the problem... Restarting Splunk in DEBUG mode can point you in the right direction to understand the root cause, but the volume of messages is really huge.
Hello to all, really hoping I can make sense while asking this... I'm an entry-level IT Security Specialist and I have been tasked with re-writing our current query for overnight logins, as our existing query does not put out the correct information we need. Here is the current query:

source=WinEventLog:Security EventCode=4624 OR (EventCode=4776 Keywords="Audit Success")
| eval Account = mvindex(Account_Name, 1)
| eval TimeHour = strftime(_time, "%H")
| eval Source = coalesce(Source_Network_Address, Source_Workstation)
| eval Source = if(Source="127.0.0.1" OR Source="::1" OR Source="-" OR Source="", host, Source)
| where (TimeHour > 20 AND TimeHour < 24) OR (TimeHour > 0 AND TimeHour < 5)
| bin _time span=12h aligntime=@d+20h
| eval NightOf = strftime(_time, "%m/%d/%Y")
| lookup dnslookup clientip as Source OUTPUT clienthost as SourceDevice
| search NOT Account="*$" NOT Account="HealthMail*" NOT Account="System"
| stats count as LoginEvents values(SourceDevice) as SourceDevices by Account NightOf
| sort NightOf Account SourceDevices
| table NightOf Account SourceDevices LoginEvents

I need to somehow add an exclusion to the query for logon type 3 (meaning for Splunk to omit those from its search), as well as add our asset to the query, so that Splunk will only search that particular asset. I know nothing about coding or scripts, and my boss just thought it would be super fun if the guy with the least experience tried to figure it all out, since the current query does not give us the data that we need for our audits. In a nutshell, we need Splunk to tell us who was logged in between 8pm and 5am, that it was a logon type 2, and what computer system they were on. If anyone could help out an absolute noob here I would greatly appreciate it!
I think I found the answer on Reddit. It's in Spanish, though: https://www.reddit.com/user/Splunker1123/comments/198992x/splunk_y_el_esquema_nacional_de_seguridad_ens/?utm_source=share&utm_medium=web2x&context=3
The CIM manual should help.  It describes each DM field so you can determine which of the fields in your data map best.  See https://docs.splunk.com/Documentation/CIM/5.3.1/User/Endpoint#Processes
A custom code block? We are talking about an app action; it may vary, and a custom code block is not suitable here without further interaction -_- Also, what if there are 10 different paths? A filter/decision should simply report which path was taken based on the condition... just like in Ansible.
I understand now. If the only thing you need to do is evaluate whether the user exists or not, and there are no actions you need to take down either branch, I'd say a simple custom code block is the way to go. Filter and decision blocks are more useful for deciding a path for the playbook to continue down. Something along the lines of:

if user_exists:
    output_variable = True
else:
    output_variable = False
If you're using the Box Plot viz at https://splunkbase.splunk.com/app/3157, that's an archived app that is probably outdated and may have compatibility issues. Consider trying the box plot viz available in https://splunkbase.splunk.com/app/5730. That said, does the time field have values greater than 20? If so, the lack of a default condition will cause the case function to set total_time to null, which might generate the "trace 0" graph points.

| eval total_time=case(time<=8, "8", time<=9, "8~9", time<=10, "9~10", time<=11, "10~11", time<=15, "11~15", time<=20, "15~20", 1==1, ">20")
So the Report Completed message occurs before the Report Started message? Assuming it is actually the latest (by _time) that you want to keep, try something like this:

index="index" sourcetype=host=hq " Mark transaction results" "port = 2022"
| rex "client\s'(?<client>[^']*)'"
| rex "transaction\s'(?<transaction>[^']*)'"
| rex "user\s'(?<user>[^']*)'"
| rex "(?<user_transaction>\S+)\sReport Finished successfully"
| eval user_transaction = if(isnull(user_transaction), client . "-" . user . "-" . transaction, user_transaction)
| stats latest(_raw) as _raw by user_transaction
Hi All, I have a particular issue: getting data from the KV store is working fine, but saving anything using helper.save_check_point is failing. I also added logs and found that this issue occurs only for the batch_save POST API which Splunk uses internally, and the error I get is:

File "/opt/splunk/lib/python3.7/http/client.py", line 1373, in getresponse
    response.begin()
File "/opt/splunk/lib/python3.7/http/client.py", line 319, in begin
    version, status, reason = self._read_status()
File "/opt/splunk/lib/python3.7/http/client.py", line 288, in _read_status
    raise RemoteDisconnected("Remote end closed connection without"
http.client.RemoteDisconnected: Remote end closed connection without response
Apologies, here are the events.

Event 1:
2024-01-17 09:35:10.3370 [44] INFO[.java..TransLogCallback] Starting Report for client 'OBI96' user 'auto' for transaction '4826143 '' Report ID '222' - Retry #1
Date : 1/17/2024
Time : 9:35:10 AM
Message : Mark transaction results: 1, Query : UPDATE transactions SET queued = 0, processing = 1, serviceip = ? , timestarted = now() WHERE clientcode = ? AND username = ? AND transid = ? (100.00.000.00, OBI96, auto, 4826143 ), port = 2222
^^-------------------------------------------------------------------^^

Event 2:
2024-01-17 08:41:35.9174 [94] INFO [.java..TransLogCallback] OBI96-auto-4826143 Report Finished successfully at 8:41:35 AM on 1/17/2024
^^-----------------------------------------------