All Topics


I set up half a dozen serverclasses leveraging CMDB-sourced .csv files and whitelist.from_pathname. This works great for managing serverclasses with thousands of clients, though I was surprised that the whitelisted clients in these serverclasses get stale unless I issue a reload deploy-server / reload deploy-server -class. I couldn't find any flag on the DS that lets whitelist.from_pathname refresh whenever the linked .csv is updated. Is there a better way?
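As far as I know there is no setting that makes the deployment server watch the .csv for changes, so a common workaround is to schedule the reload right after the CMDB export lands. A minimal sketch, assuming a nightly export and a default install path (schedule, path, and credentials are all assumptions):

# Hypothetical cron entry on the deployment server: reload serverclasses
# shortly after the nightly CMDB CSV export finishes
15 2 * * * /opt/splunk/bin/splunk reload deploy-server -auth admin:changeme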
Recently we've been noticing that a lot of searches get connection timeouts when trying to query our indexer cluster. We keep getting the message:

2 errors occurred while the search was executing. Therefore, search results might be incomplete. Hide errors.
Error connecting: Connect Timeout Timeout error.
Timed out waiting for peer searchpeer01. Search results might be incomplete! If this occurs frequently, receiveTimeout in distsearch.conf might need to be increased.

Delving into search.log, we see that we are getting 502 Bad Gateway from the indexer cluster:

06-28-2021 12:45:14.663 ERROR SearchResultTransaction - Got status 502 from https://10.0.0.43:8089/services/streams/search?sh_sid=scheduler__username_aW52X2NpdF9zbm93X3NlYXJjaA__RMD565f4e7f87d23277d_at_1624880700_38630
06-28-2021 12:45:14.663 ERROR SearchResultParser - HTTP error status message from https://10.0.0.43:8089/services/streams/search?sh_sid=scheduler__username_aW52X2NpdF9zbm93X3NlYXJjaA__RMD565f4e7f87d23277d_at_1624880700_38630: Error connecting: Connect Timeout
06-28-2021 12:45:14.663 WARN SearchResultCollator - Failure received on retry collector. _unresolvedRetries=1
06-28-2021 12:45:14.663 WARN SearchResultParserExecutor - Error connecting: Connect Timeout Timeout error. for collector=searchpeer01
06-28-2021 12:45:14.663 ERROR DispatchThread - sid:scheduler__username_aW52X2NpdF9zbm93X3NlYXJjaA__RMD565f4e7f87d23277d_at_1624880700_38630 Timed out waiting for peer searchpeer01. Search results might be incomplete! If this occurs frequently, receiveTimeout in distsearch.conf might need to be increased.

Considering the receiveTimeout is already 600 seconds, I don't think increasing it will change anything. I'm not sure where these 502 errors are coming from or what to do about them. Does anyone have any insight into what may be happening? We are running version 8.1.3 on the search head and 7.3.3 on the indexer cluster (though we plan to upgrade to 8.1.4 as soon as we are able to).

Thanks!
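For reference, the timeout the error message points at lives in distsearch.conf on the search head. A minimal sketch of the relevant stanza, assuming the stock [distributedSearch] settings (the values shown are illustrative, not recommendations):

[distributedSearch]
# Seconds to wait to establish a connection to a peer / send to it / receive results
connectionTimeout = 10
sendTimeout = 60
receiveTimeout = 600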
Hi Team, I have a dashboard whose existing results show Event date, Event title, Email ID, Logon IP, Logon Location, and AD Location. The condition is that I need to remove any Logon IP used by more than 20 users from my current dashboard, and display only Logon IPs used by fewer than 20 users.

E.g.:

index=ert "192.34.23.122" earliest=-30d | stats dc(user) as "Distinct users"

Using the above query: if the logon IP 192.34.23.122 is used by more than 20 users, it should not show up on my dashboard; if it is used by fewer than 20 users, it should show up. Please suggest a suitable SPL query for this.
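One way to do this without hardcoding an IP is to compute the distinct-user count per logon IP and keep only the low-use ones. A sketch, assuming the IP is in a field called logon_ip (the field name is an assumption):

index=ert earliest=-30d
| stats dc(user) as distinct_users by logon_ip
| where distinct_users < 20

If the dashboard needs the raw events rather than the summary, the same filter can run as a subsearch that returns the qualifying logon_ip values to the outer search.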
I am seeing the below error messages on my search head cluster members. I am using version 8.2. Can I get some help resolving this?

06-29-2021 08:54:29.880 +0100 ERROR SHCMasterHTTPProxy [5358 SHPHeartbeatThread] - Low Level HTTP request failure err=failed method=POST path=/services/shcluster/captain/members captain=blXXX:8089 rc=0 actual_response_code=500 expected_response_code=201 status_line="Internal Server Error" transaction_error="<response>\n <messages>\n <msg type="ERROR">Cannot add peer=XXXX mgmtport=8089 (reason: removeOldPeer peer=FD52AB8F-AF38-47E3-BDB6-C16D42E8AFB4, serverName=blt14788004, hostport=XX:8089, but found different peer=3F2BAA5B-1792-4FE6-9393-499B8DAF8D33 with serverName=XXXXX and hostport=10.45.10.74:8089 already registered and UP)</msg>\n </messages>\n</response>\n"
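The error suggests the captain still holds a registration for a different member GUID at that host:port. One commonly suggested approach is to remove the stale member and re-join it; treat this purely as a sketch to verify against your topology first (the URI is a placeholder):

# On the affected member, drop it from the cluster:
splunk remove shcluster-member
# Then re-join it, pointing at any healthy member:
splunk add shcluster-member -current_member_uri https://<healthy_member>:8089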
Hi, from this log:

23:52:52.758 alex appinfo: Terminating due to signal: 1

How can I extract these items with rex?

user=alex appname=appinfo signal=1

Thanks,
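A sketch of a rex that matches the sample line above (it assumes the timestamp format and the literal "Terminating due to signal:" text are stable):

| rex "^\d{2}:\d{2}:\d{2}\.\d{3}\s+(?<user>\S+)\s+(?<appname>[^:\s]+):\s+Terminating due to signal:\s+(?<signal>\d+)"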
Hi,

I have a chart with two series: one of them is a bar chart and the other a line chart. I would like a click on a bar to open one specific search, and a click on a point of the line chart to open a different specific search.
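In Simple XML this can be done with per-series <condition> elements inside <drilldown>. A minimal sketch, where bar_series, line_series, and the target searches are all placeholders for your own field names and queries:

<drilldown>
  <condition field="bar_series">
    <link target="_blank">search?q=search%20index%3Dmain%20sourcetype%3Dbar_details</link>
  </condition>
  <condition field="line_series">
    <link target="_blank">search?q=search%20index%3Dmain%20sourcetype%3Dline_details</link>
  </condition>
</drilldown>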
Hi all, in Splunk there is always this icon next to your user for the "Health of Splunk Deployment". You can change these indicators and features or their thresholds, but I can't find anything about what Splunk actually does in the background to collect these values. You can find something like this in health.conf:

[feature:iowait]
display_name = IOWait
indicator:avg_cpu__max_perc_last_3m:description = This indicator tracks the average IOWait percentage across all CPUs on the machine running the Splunk Enterprise instance, over the last 3 minute window. By default, this indicator will turn Yellow if the percentage exceeds 1% and Red if it exceeds 3% during this window.
indicator:avg_cpu__max_perc_last_3m:red = 3
indicator:avg_cpu__max_perc_last_3m:yellow = 1
indicator:single_cpu__max_perc_last_3m:description = This indicator tracks the IOWait percentage for the single most bottle-necked CPU on the machine running the Splunk Enterprise instance, over the last 3 minute window. By default, this indicator will turn Yellow if the percentage exceeds 5% and Red if it exceeds 10% during this window.
indicator:single_cpu__max_perc_last_3m:red = 10
indicator:single_cpu__max_perc_last_3m:yellow = 5
indicator:sum_top3_cpu_percs__max_last_3m:description = This indicator tracks the sum of IOWait percentage for the three most bottle-necked CPUs on the machine running the Splunk Enterprise instance, over the last 3 minute window. By default, this indicator will turn Yellow if the sum exceeds 7% and Red if it exceeds 15% during this window.
indicator:sum_top3_cpu_percs__max_last_3m:red = 15
indicator:sum_top3_cpu_percs__max_last_3m:yellow = 7

I can't find out how Splunk generates this data or how this alert or indicator is created. There must be some kind of process, like a scripted input that executes a top command to read the CPU wait time, writes it to health.log, has Splunk ingest that log, and runs a search that provides the information for these indicators.
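I believe the values come from splunkd's built-in instrumentation rather than from an external scripted input, and the current tree of health features and indicator states can be inspected over REST. A sketch (run against the instance in question):

| rest splunk_server=local /services/server/health/splunkd/details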
I work at XYZ firm and we're organizing a contest for some candidates from college. I'd like to set up a Splunk instance and upload some demo data there for the contest. The candidates would need to perform searches on the Splunk instance. That's the requirement. Which Splunk license can be used to share an instance with multiple users who'll be performing searches at the same time?
Hi, I have made an app that generates a lookup CSV file. The saved search runs fine and the file is generated in the lookups folder. The lookup table file is made global, the lookup definition is made global, and the automatic lookup is also made global in the app. All config is saved in the default folder and pushed out with our SH deployer.

When enabling the automatic lookup for my source in props.conf, I get an error when searching: "Could not load lookup" from my indexer peers. If I do the exact same thing in the web UI, with the config then saved in "local", I do not see the same error.

Doing a manual lookup in SPL, "| lookup gamecatalog_lookup id as NtGameId OUTPUTNEW name status vendor gameStudio", results in the error "Streamed search execute failed because: Error in 'lookup' command: Could not construct lookup". But when I do the same search and add local=true, the search finishes just fine and does the extraction.

What am I missing? I should be able to push automatic lookups with my own app? All permissions and sharing are global and everyone has read.
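For comparison, this is roughly what the pushed config would need to look like in the app's default directory (the stanza and sourcetype names below are assumptions based on your lookup command). Since the CSV is generated at runtime on the search heads, it is also worth checking that it is not being excluded from the knowledge bundle sent to the indexers (e.g. by a [replicationBlacklist] stanza or the bundle size limit in distsearch.conf), which would explain the peers failing to construct the lookup:

# transforms.conf
[gamecatalog_lookup]
filename = gamecatalog.csv

# props.conf
[your_sourcetype]
LOOKUP-gamecatalog = gamecatalog_lookup id AS NtGameId OUTPUTNEW name status vendor gameStudio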
Hello!

Log:

transactionId: NA, businesskey: GRNJob, environment: prod, flowName: app-report-grn-scheduler-flow, message: Computed Range for Aribus GRN Query - {"viewTemplateName":"mcdonalds_Receipt_updatedRange", "filters": {

Based on the above log, I need to search any logs for the message field (message: "anything"). Please help with a regex to extract it.
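A sketch that extracts the text after "message:" up to the start of the trailing JSON and then filters on it (the index name is a placeholder, and the pattern assumes the message itself never contains a comma or brace):

index=your_index "message:"
| rex "message:\s*(?<msg>[^,{]+)"
| search msg="Computed Range for Aribus GRN Query*"

Note the captured value may keep a trailing " - " separator before the JSON, which you can strip with an eval trim() if it matters.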
Hello Guys

First let me please thank you for all the help I get from you guys... you people rock!!!!

I am trying to extract a code that is inside a string that reads as follows:

BOX="|autx_path\IUIUXX-8569545|"

I want to be able to extract the numbers at the end and also the first 3 characters to the left of the numbers, so this would give me XX-8569545, as "XX-" are the first 3 characters on the left side of the numbers. Is this even possible in Splunk?

Thank you so much for your help guys

Love, Cindy
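It is. A sketch, assuming BOX is already extracted as a field whose value ends with the code and a closing pipe:

| rex field=BOX "(?<code>.{3}\d+)[|\"]*$"

On the sample value this captures XX-8569545. If the code format is stable, a stricter pattern such as (?<code>[A-Z]{2}-\d+) is safer than grabbing 3 arbitrary characters.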
Hi Team, I have created a lookup and a KV store on the deployer, but when I execute the below bundle push command, the lookups and KV store are not getting pushed to the search heads.

./splunk apply shcluster-bundle -target https://IP:8089 -auth admin:password
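Two things worth checking, as a sketch (the app name and file names below are placeholders): the deployer only distributes what sits under its shcluster staging directory, and it never pushes KV store data, only the collections.conf definition; the data itself is replicated between the SHC members.

# Files must live under the deployer's staging directory before the push:
#   $SPLUNK_HOME/etc/shcluster/apps/my_app/lookups/my_lookup.csv
#   $SPLUNK_HOME/etc/shcluster/apps/my_app/default/collections.conf
./splunk apply shcluster-bundle -target https://<sh_member>:8089 -auth admin:password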
Hi, I want to look at each response_time value for each Tier, and count the number of response times that are above and below the MaxResponseTime that corresponds to each separate Tier. I have 5 Tiers (categories), all with different MaxResponseTime values. Here's the search so far:

| datamodel metric summariesonly=true search
| search "metric.date"=2021-06-28
| rename "metric.date" as date
| rename "metric.Tier" as Tier
| rename "metric.response_time" as response_time
| stats values(response_time) by Tier
| rename values(response_time) as response_time
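If the per-Tier limits live in a lookup (the lookup name tier_limits and its fields are assumptions; any source that puts a MaxResponseTime field on each event works), a sketch would be to classify each event and then count by Tier and classification:

| datamodel metric summariesonly=true search
| search "metric.date"=2021-06-28
| rename "metric.Tier" as Tier, "metric.response_time" as response_time
| lookup tier_limits Tier OUTPUT MaxResponseTime
| eval status=if(response_time > MaxResponseTime, "above", "below")
| stats count by Tier, status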
It looks like in version 3.1.1 the default for AAD Sign-ins swaps away from BETA to 1.0, which does not appear to provide authentication_method (MFA/2FA) information. This default change in behaviour can be seen in the file 'input_module_MS_AAD_signins.py'.

BEFORE:

url = graph_base_url + "/beta/auditLogs/signIns?$orderby=createdDateTime&$filter=createdDateTime+ge+%s+and+createdDateTime+le+%s" % (query_date, end_date.strftime('%Y-%m-%dT%H:%M:%S.%fZ'))

AFTER:

url = graph_base_url + "/%s/auditLogs/signIns?$orderby=createdDateTime&$filter=createdDateTime+ge+%s+and+createdDateTime+le+%s" % (endpoint, query_date, end_date.strftime('%Y-%m-%dT%H:%M:%S.%fZ'))

For anyone who really needs/wants authentication_method information, I strongly encourage you to go back to your inputs and change the dropdown back to BETA. These fields seem to have been dropped by MS in v1.0, unless BETA is simply ahead, in which case they will eventually land in v1.0 and all that is required is to change the input.
I have a Splunk Cloud environment (production) from which I want to migrate all my knowledge objects to my non-prod Splunk Cloud instance, without leveraging Splunk Cloud support. Are there any links, processes, or other information around this type of migration?
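If REST access to the management endpoint is available on your stack (in Splunk Cloud this often isn't open by default, so treat this purely as a sketch), knowledge objects can be enumerated via the REST API and recreated on the target, e.g. saved searches:

curl -k -u admin:password "https://<prod_stack>:8089/servicesNS/-/-/saved/searches?output_mode=json&count=0"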
Hello everyone, I hope you guys are doing well.

I have a sort of simple question but I have not been able to sort out a solution. I want to filter the multivalue entries of a table based on a numeric criterion. This is an example. I have this:

AGENT   INX        ROCKS      TASK
XX_9    7 9 -6     T Y U      TY-8 GY-0 FG-67
XX_10   7 -49 -66  UY IO UJI  TY-8E G-0 VG-67

I would like to remove, in every row, the entries where the multivalue field "INX" has negative numbers, and end up with something like this:

AGENT   INX   ROCKS   TASK
XX_9    7 9   T Y     TY-8 GY-0
XX_10   7     UY      TY-8E

I have tried using mvfilter, mvfind, and mvindex, but every attempt has been unsuccessful so far, so I would really love for you guys to help me out. Thanks a LOTTTT!

Kindly, Cindy
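mvfilter can only reference one field, so it can't keep ROCKS and TASK aligned with INX. A sketch that zips the three fields together, expands them, filters, and regroups (it assumes the three multivalue fields always have the same number of values, in the same order, and that the values never contain the "|" delimiter):

| eval zipped=mvzip(mvzip(INX, ROCKS, "|"), TASK, "|")
| mvexpand zipped
| eval parts=split(zipped, "|")
| eval INX=mvindex(parts, 0), ROCKS=mvindex(parts, 1), TASK=mvindex(parts, 2)
| where tonumber(INX) >= 0
| stats list(INX) as INX, list(ROCKS) as ROCKS, list(TASK) as TASK by AGENT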
We are starting to get workflows for Jira/Confluence up and running, and we purchased Splunk Enterprise Security/Phantom. I was just looking for ideas on workflows between the three, to make the best use of all of them and, of course, the least amount of work for myself and my team. Although I don't mind doing a lot of the work up front if I see it's going to help down the road. Thank you!!
So I'm sorry if this is a rather stupid question, but I have been thrown into creating a dashboard, I've only taken a couple of virtual courses on Splunk, and I don't remember this being covered. I know how to create dashboards from searches; however, I need to create a dashboard from something I'm pulling up through the Incident Review search, or, if I group the events into an investigation, create a dashboard from those results. Alternatively, is there a way to figure out exactly what search string Incident Review is using? If there is, I would know how to go from there, but I've tried searching through the indexes and sources I feel are most commonly used and I can't get the results I get in Incident Review.
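In Enterprise Security, Incident Review is driven by notable events, which are reachable in plain SPL through the `notable` macro. A sketch of a search you could build a dashboard panel from (the table fields are examples of what the macro typically enriches):

`notable`
| table _time, rule_name, urgency, status_label, owner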
I have 2 data sets:

index=support source=sites earliest=-1d@d latest=-0d@d
index=support source=sites earliest=-0d@d latest=now

I want to pull out the data that has changed in data set 2 as compared to data set 1.
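One sketch is to run both windows in a single search, tag each event with its period, and keep the field combinations that appear only today (site_id and status are placeholders for whatever fields define a "row" in your data):

index=support source=sites earliest=-1d@d latest=now
| eval period=if(_time < relative_time(now(), "@d"), "yesterday", "today")
| stats values(period) as periods by site_id, status
| where mvcount(periods)=1 AND periods="today"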
Hi, I have the following value in a field which needs to be split into multiple fields.

Classname: abc.TestAutomation.NNNN.Specs.Prod/NDisableTransactionalAccessUsers.#()::TestAssembly:abc.TestAutomation

Required output:

Productname: abc.TestAutomation.NNNN.Specs.Prod
Featurename: NDisableTransactionalAccessUsers
Project: TestAssembly:abc.TestAutomation

I have been trying to extract the values into my fields using the rex command, but I am failing:

source="Reports.csv" index="prod_reports_data" sourcetype="ReportsData"
| rex "classname(?<Productname>/*)\.(?<Featurename>#*)\.(?<Project>.*)"
| table classname Productname Featurename Project

When I execute this command, there are no results. I am very new to Splunk; can someone guide me? Thanks.
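The posted pattern never matches because /* and #* match runs of literal slash and hash characters rather than the name segments. A sketch that splits on the literal /, .#(), and :: delimiters in the sample value (it assumes the extracted field is named classname and the delimiters are stable):

source="Reports.csv" index="prod_reports_data" sourcetype="ReportsData"
| rex field=classname "^(?<Productname>[^/]+)/(?<Featurename>[^.]+)\.#\(\)::(?<Project>.+)$"
| table classname Productname Featurename Project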