
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Hi, I'm trying to capture 2 fields from the word LON_RTI2_SND.TRACE within the same regular expression: the first part (LON) and the remainder (RTI2_SND.TRACE). Thanks
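A minimal sketch, assuming the value lives in a field called word and that the split is on the first underscore (the field and capture-group names are placeholders):
| rex field=word "^(?<prefix>[^_]+)_(?<remainder>.+)$"
On LON_RTI2_SND.TRACE this puts LON into prefix and RTI2_SND.TRACE into remainder.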
Hey, I want a regex to pull fields out of this event: 10.66.189.62 -- -- -[17/May/2022:05:59:16--0400]--502- "POST /astra/sliceHTTP/1.1" req_len=1776-req_cont_len=117-req_cont_enc="-"-res_body_len=341 res_len=733 "https://ninepoint.blackrock.com/astra/ ". "Mozilla/5.0- (Macintosh; Intel-Mac-OS-X-10_15_7) -AppleWebKit/537.36-(KHTML,-Like-Gecko) Chrome/10.0.4896.127 Safari/537.36" x_fw_for="-".req_time=278.326-ups_res_time=278.326 ups_con_time=0.011-ups_status=502-pipe=. -VNDRegID=undefined- as; ninepoint Could you please help me with my query? Thank you
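A minimal sketch against the sample above, assuming you want the client IP, the HTTP status, and the request time (the capture-group names are placeholders, and the delimiters may differ in your real events since this sample looks reformatted):
| rex field=_raw "^(?<client_ip>\d{1,3}(?:\.\d{1,3}){3}).*\]--(?<status>\d{3})-.*req_time=(?<req_time>[\d.]+)"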
I have the following _raw field in my index: _raw Response Headers: {\x27Date\x27: \x27Fri, 13 May 2022 02:59:34 GMT\x27, \x27Content-Type\x27: \x27application/json; charset=utf-8\x27} So, I realized \x27 = '. But there is no way to convert that string into a human-readable string, like this: Response Headers: {'Date': 'Fri, 13 May 2022 02:59:34 GMT', 'Content-Type': 'application/json; charset=utf-8'} I tried something like this, without success: | eval myfield = replace(tostring(_raw),"x27","'") Then I checked whether the string contains "x27" and it turns out it is not being detected: | eval exists=if(like(tostring(_raw), "%x27%"), "YES", "NO") Is there a way to convert that weird string into a human-readable string?
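A minimal sketch, assuming _raw really does contain the literal four-character sequence \x27; the backslash has to be escaped twice, once for the string parser and once for the regex engine that replace() uses:
| eval myfield = replace(_raw, "\\\\x27", "'")
If that still matches nothing, it suggests the \x27 is only how the value is rendered in the UI, not what is actually stored, which would also explain why the like() check returns NO.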
How do we renew SAML authentication credentials in Splunk?
I have a new UF installation and wish to register it with an existing Deployment Server. When I run the command: $SPLUNK_HOME/bin/splunk set deploy-poll <FQDN of DS>:<management port> I am prompted to log in, and I provide credentials that I can use to log into the DS web interface. It gives me "Login failed" though. How can I diagnose this further?
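One thing worth checking: set deploy-poll authenticates against the local forwarder's management interface, not the deployment server, so the prompt wants the UF's own admin credentials (set when the UF was first started) rather than your DS web login. A minimal sketch, with ds.example.com, 8089, and the credentials as placeholders:
$SPLUNK_HOME/bin/splunk set deploy-poll ds.example.com:8089 -auth admin:<UF admin password>
If it still fails, $SPLUNK_HOME/var/log/splunk/splunkd.log on the forwarder usually shows the underlying error.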
Hello there, The deal is that I have 2 forwarders that ship exactly the same logs (I'm using 2 forwarders to avoid having a SPOF), and I want to find a solution so that the logs are not duplicated. I thought of using a load balancer, but I first want to know whether there is some configuration in Splunk that allows this, please. Best regards, Abir
Hello Splunkers, @SPL, I was working on some development activity and got stuck. We have a scenario where I need to check, for a single day, which users have done transactions with more than 3 different vendors. Sample data:
Date | User ID | Vendor | Transactions
10/5/2021 | user 1 | SAAS (User1) | $$$$$
10/5/2021 | user 2 | PAAS (User1) | $$$$$
10/7/2021 | user 3 | IAAS | $$$$$
10/8/2021 | user 4 | AAA | $$$$$
10/9/2021 | user 5 | CCCC | $$$$$
10/10/2021 | user 6 | FFFF | $$$$$
10/5/2021 | user 7 | XXXX (User1) | $$$$$
10/6/2021 | user 8 | ZZZZ | $$$$$
10/8/2021 | user 9 | EEE | $$$$$
10/9/2021 | user 10 | QQQQ | $$$$$
10/10/2021 | user 11 | SSSS | $$$$$
10/11/2021 | user 12 | PPPP | $$$$$
10/12/2021 | user 13 | WWW | $$$$$
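A minimal sketch of the usual approach, assuming the extracted field names are user and vendor (placeholders; adjust to your actual field names and index):
index=foo
| bin _time span=1d
| stats dc(vendor) as vendor_count values(vendor) as vendors by _time user
| where vendor_count > 3
dc() counts distinct vendors per user per day, so only users with more than 3 different vendors on the same day survive the where clause.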
In our Splunk environment, we currently ingest Azure AD logs and we have three different sourcetypes: azure:aad:signin azure:aad:audit azure:aad:user There are no missing events and the ingested data is very rich. However, I don't see any way within the Splunk-ingested Azure sign-in data to filter by authentication method (single-factor vs. multi-factor). This is something that can be done via Azure Active Directory > Monitoring > Sign-in logs, but I do not see any reference to it in my Splunk data (I do see a lot of conditional access enforcement and the other primary fields, but not any of the secondary fields that could be used for filtering in Azure).
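One place to look, assuming the add-on forwards the Microsoft Graph signIn schema unmodified: that schema has an authenticationRequirement property with values such as singleFactorAuthentication and multiFactorAuthentication (whether it is present in your events is an assumption worth verifying). A quick check:
sourcetype="azure:aad:signin" | stats count by authenticationRequirement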
Hi Team, Our vendor needs MIB files from our Splunk heavy forwarder (Linux) for monitoring purposes. How can we get those? Can someone please provide steps? We are using SNMP v2. Thanks in advance.
I am working on a partner integration project using Splunk Security Essentials (SSE) with my custom security content. Locally, I have the security use cases in the JSON format that SSE accepts, but I want to do this integration through my private GitLab by uploading these security use cases there. However, there is a need to keep this GitLab private, so I can't just make SSE download the formatted JSON content by simply passing it the URL in the `content_download_url` setting from `essentials_update.conf`. Is there a setting in the `essentials_update.conf` file, or in some other file, where I can also include an access token for my GitLab? If not, what other ways are there to download content from this private GitLab page in order to integrate with SSE?
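One avenue to test, assuming your GitLab instance allows token authentication on raw-file API calls via the private_token query parameter (the host, project ID, file name, branch, and token below are all placeholders): point content_download_url at the raw-file endpoint with the token appended.
content_download_url = https://gitlab.example.com/api/v4/projects/1234/repository/files/sse_content.json/raw?ref=main&private_token=<YOUR_TOKEN>
The obvious trade-off is that the token ends up in a .conf file, so a read-only project access token would be safer than a personal token.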
Hey Splunkers, I am not sure if this is possible, but what I am trying to do is pass the values of a search into the eval command to build a statement or an event. So, for example, consider that the search below returns first name, last name, and country details for multiple users. With those field values, I am trying to create an eval statement like this: index=foo source=user_detail |table first_name last_name country |eval statement = My name is "$first_name $ $last_name$ and i come from $country$ |table statement But this is not passing those field values into the eval statement, so does anyone know if there is a way to do this? Thanks.
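A minimal sketch of the usual way to do this in eval, string concatenation with the . operator (the $...$ token syntax works in dashboards and alert actions, not inside eval):
index=foo source=user_detail
| eval statement = "My name is ".first_name." ".last_name." and I come from ".country
| table statement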
Good Morning, I am working on connecting my Splunk Cloud (trial at the moment, purchase coming soon) to Jira Cloud Free, and I'm able to retrieve all Jira data in Splunk (projects, issue types, autofill when setting alerts), but the alerts that are supposed to create tickets are still pending. I can see Splunk accessing the API on the Jira side, and none of the troubleshooting steps here helped. I tried the other Jira Splunk add-on and that one wouldn't even function, so I got further with this one, but it's still just short of working. Any ideas? Is it something simple I'm missing? Thank you! As an aside: are any of you aware of any other Splunk add-ons for free ticketing tools? I can't get Alert Manager to create incidents on Splunk Cloud (just on the trial? not sure). I'm trying to get incidents created from alerts and nothing seems to fit the bill. Thank you!
Is it possible to run a scripted input on a search peer? Also, is it possible to ensure it runs on all search peers? Thanks ahead of time.
I've got a query I want to run on a daily basis and write the results to a lookup (# of results, once per day); then I want to be able to query that lookup to pull the last 7 days of counts. Is this possible? Is there a better way? I have a lookup file of IDS exclusions that I am constantly updating, and I want to be able to see how many results the search returned each day. If I run the search at the end of the 7 days it won't be accurate, because it would run against the lookup after 7 days of updates: if I had 20 results on Monday and then put something in the lookup that excluded those 20, I wouldn't have visibility when I ran it the next day, since the lookup would now exclude those 20 results. I was thinking that if I could store the count somewhere each day and query it later, I wouldn't need to run anything against the exclusions lookup; I could just pull the historical counts I wrote. Sorry if I am overcomplicating this, I'm new to Splunk, so if there is a better way to do it please let me know!
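A minimal sketch of one way to do it with a scheduled search and outputlookup append=true (daily_counts.csv and the field names are placeholders). The daily search, scheduled once per day:
<your base search> | stats count | eval day=strftime(now(), "%Y-%m-%d") | table day count | outputlookup append=true daily_counts.csv
Then to read back the last 7 days of counts:
| inputlookup daily_counts.csv | eval _time=strptime(day, "%Y-%m-%d") | where _time >= relative_time(now(), "-7d@d") | sort day
A summary index would do the same job at larger scale, but a small lookup like this is usually fine for one count per day.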
We have been using the Microsoft Azure App for Splunk for some time; however, recently it has started throwing a 404 Page Not Found error. I tried rolling back to a previous version but am still getting the same thing. This is also happening with the Microsoft 365 App for Splunk. Both were working, but I can't figure out what has changed to cause them to throw the 404. CentOS 7, Splunk Enterprise 8.2.6. Thanks! Aaron
We have just started using the IT Essentials app. We are generating alarms based on thresholds being breached, but the thresholds only seem to fire when, for example, CPU peaks at 90%; what I am looking for is generating an alarm when CPU stays at 100% for a period of 10 minutes. Below is my SPL; would using time_window = 15m suffice?
| mstats max(ps_metric.pctCPU) as val WHERE index = em_metrics OR index = itsi_im_metrics by host span=5m
| eval val=100-val
| rename host as host
| eval host="host=".$host$ , id="ta_nix"
| lookup itsi_entities entity_type_ids as id _itsi_identifier_lookups as host OUTPUT _key as entity_key, title, _itsi_informational_lookups as info_lookup, _itsi_identifier_lookups as alias_lookup
| search entity_key != NULL
| eval entity_type="Unix/Linux Add-on"
| eval metric_name="CPU Usage Percent"
| eval itsiSeverity=case(val <= 75, 2, val <= 90 and val > 75, 4, val > 90, 6)
| eval itsiAlert=metric_name." alert for ".entity_type." entity type"
| eval itsiDrilldownURI="/app/itsi/entity_detail?entity_key=".entity_key
| eval itsiInstance=title
| eval entity_title=title
| eval itsiNotableTitle=title
| eval val = round(val, 2)
| eval itsiDetails = metric_name + " current value is " + val
| eval sec_grp=default_itsi_security_group
| eval alert_source="entity_type"
| where IsNull(is_entity_in_maintenance) OR (is_entity_in_maintenance != 1)
| fields - host
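A minimal sketch of one way to require the condition to hold for roughly 10 minutes, by demanding two consecutive 5-minute buckets at 100% before alerting; it reuses your mstats base and the 100-val inversion, and you would fold the result back into the ITSI boilerplate above:
| mstats max(ps_metric.pctCPU) as val WHERE index=em_metrics OR index=itsi_im_metrics by host span=5m
| eval val=100-val
| streamstats window=2 global=false min(val) as sustained_val by host
| where sustained_val >= 100
Because min() is taken over the last two buckets per host, a single 5-minute spike is not enough to trip the where clause.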
Hi All, my question is: if we put INDEXED_EXTRACTIONS = json in props.conf on the HF where we are ingesting events via a HEC input, and put KV_MODE = none in props.conf on the SH, will any custom fields still be extracted on the SH or not? Thanks in advance
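A minimal sketch of the usual pairing, assuming the setting in question is INDEXED_EXTRACTIONS = json and a placeholder sourcetype of my_json. The JSON fields become index-time fields on the HF, so they remain searchable on the SH; KV_MODE = none only stops them from being extracted a second time at search time.
# props.conf on the HF (parsing tier)
[my_json]
INDEXED_EXTRACTIONS = json
# props.conf on the SH
[my_json]
KV_MODE = none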
On the page "Configure data collection using a REST API call" there is a section about adding setup parameters. However, on the shell input page "Configure data collection using a shell command" ther... See more...
On the page "Configure data collection using a REST API call" there is a section about adding setup parameters. However, on the shell input page "Configure data collection using a shell command" there is no such section. There is a section about adding input parameters, but it's not the same. The reason I'm asking is because I'm trying to add setup parameterts to a shell input, and I just get error messages in the final validation page, no matter what I do. Should it be the same syntax as for REST-inputs, or is it different for shell inputs? See attached screenshot of what I'm trying to do. I've already tried the following versions of the parameter syntax, but I get the same error messages for all of them,  and yes, I've added values to the parameters (in the Add-on Setup Parameters tab).   ${__settings__.additional_parameters.my_parameter} ${additional_parameters.my_parameter} ${my_parameter}   Also, I get it to work if I switch to input parameters instead, but in this case I want to use setup parameters, as I'm planning to re-use the parameters in several inputs.
Hi, we have some data that contains a hierarchy of folders that we want to extract from the source path; the raw data looks like this: source=/usr/local/intranet/areas/ua1/output/MUN We would like to create 2 regexes, one to extract "intranet" and one to extract "output". Can someone please help? Thanks
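A minimal sketch, assuming "intranet" is always the third path segment and "output" always the sixth (inferred from the single sample; the capture-group names are placeholders):
| rex field=source "^/(?:[^/]+/){2}(?<level3_folder>[^/]+)"
| rex field=source "^/(?:[^/]+/){5}(?<level6_folder>[^/]+)"
On /usr/local/intranet/areas/ua1/output/MUN these pull out intranet and output respectively.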
Hi all, We are trying to show the bytes/s, averaged over 15 minutes. I'm getting far lower results if I use per_second than from a live timechart with a span of 1s. So: index="datafeed" | where isnotnull(bytes) | timechart span=15m per_second(bytes) gives an average of 10mb/s, whereas: index="datafeed" | where isnotnull(bytes) | timechart span=1s sum(bytes) shows the data constantly hovering around the 100mb/s mark, so the 15-minute average must be up at that level. Am I missing something obvious? Thanks for any pointers!
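One thing to check: per_second over a 15m span is the bucket's sum(bytes) divided by all 900 seconds in the bucket, so seconds with no events still count in the denominator. A quick way to confirm, reusing your existing search:
index="datafeed" | where isnotnull(bytes) | timechart span=15m sum(bytes) as total_bytes | eval bytes_per_sec=total_bytes/900
If this matches the per_second output and is still roughly 10x lower than the 1s view, the feed most likely bursts at ~100mb/s but is idle for much of each 15-minute window, which pulls the average down.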