All Topics


Hello Splunk Gurus,

I am trying to generate tabular data for API requests. The following query produces the table below; FirstComp, SecondComp, and ThirdComp are fields extracted at run time from the log.

index=micro host=app150*usa.com "API Timeline"
| rex field=_raw "FirstCompTime:(?<FirstComp>[^\,]+)"
| rex field=_raw "SecondCompTime:(?<SecondComp>[^\,]+)"
| rex field=_raw "ThirdCompTime:(?<ThirdComp>[^\,]+)"
| table FirstComp, SecondComp, ThirdComp

FirstComp  SecondComp  ThirdComp
78         25          31
80         22          34
81         26          36

Now I need to calculate the 95th and 99th percentiles, with the component names appearing in the first column, as shown below:

Components  95th percentile  99th percentile
FirstComp   77               79
SecondComp  23               24
ThirdComp   32               35

In other words, the desired output shows the 95th and 99th percentiles per component: the original column headers become the values of the first column, and the next two columns hold the respective 95th and 99th percentiles.

Thanks in advance for your time and help.
Tanzy
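One possible approach (an untested sketch built from the query above) is to turn the three columns into rows with untable and then compute the percentiles per component:

index=micro host=app150*usa.com "API Timeline"
| rex field=_raw "FirstCompTime:(?<FirstComp>[^\,]+)"
| rex field=_raw "SecondCompTime:(?<SecondComp>[^\,]+)"
| rex field=_raw "ThirdCompTime:(?<ThirdComp>[^\,]+)"
| eval row=1
| table row FirstComp SecondComp ThirdComp
| untable row Components value
| stats perc95(value) AS "95th percentile" perc99(value) AS "99th percentile" BY Components

The eval row=1 only exists to give untable a fixed first field; untable then converts each event's three columns into three rows (component name plus value), and the final stats aggregates per component.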
We have recently migrated from on-prem to Splunk Cloud. The current setup is: UFs (several of them) --> 2x HFs --> Splunk Cloud.

On one of the UF hosts, outputs.conf is currently configured to send events to both the on-prem indexer (for business reasons) and the HF, so that the events are visible on both the on-prem and Cloud indexers for a short period until on-prem is retired completely.

When I query the same index on on-prem and Cloud for the same time interval, event breaking is interpreted differently on Cloud compared to on-prem, even though both search heads have the same props.conf for my sourcetype. For example, the same search string on the same index returns different event counts on the two search heads. Also, on Cloud, my events are not breaking according to the regex defined for line breaking, and some events are completely missing on Cloud even though I can see them on the on-prem search head.

props.conf snippet:

DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
MAX_TIMESTAMP_LOOKAHEAD = 12
NO_BINARY_CHECK = true
TIME_FORMAT = %H:%M:%S.%3N
category = Custom
pulldown_type = 1

Do I need to put a copy of props.conf on the HF as well? If yes, must I put the props.conf for every sourcetype on the HF, or only under certain specific conditions?
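For reference: in a UF --> HF --> Splunk Cloud pipeline, index-time settings such as LINE_BREAKER, TIME_FORMAT, and MAX_TIMESTAMP_LOOKAHEAD are applied by the first full (non-UF) Splunk instance that parses the data, which in this setup is the HF, so those settings generally do need to be on the HF; search-time settings can stay on the search heads. A minimal sketch of the HF copy, with the sourcetype name as a placeholder:

[my_custom_sourcetype]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
MAX_TIMESTAMP_LOOKAHEAD = 12
NO_BINARY_CHECK = true
TIME_FORMAT = %H:%M:%S.%3N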
Hi, we have two fields, latency and partition. I want to graph latency vs. time on one of our machines for each of our partitions, so the resulting panel will have four separate line graphs. The log output we have is shown in a screenshot (not included here); as you can see there, there are 4 different partitions: 00-BTECEU, 01-BTECEU, 02-BTECEU, 03-BTECEU. I was able to parse out the partition, but I'm having issues separating the latency for each partition.

Here's the query I tried, but the resulting table is empty:

host=irmnrjeb0227d AND latency
| reverse
| rex field=name "Partition\s(?<pName>\d\d\W\w+)"
| streamstats current=f last(latency) as previouslatency by pName
| rename latency as currentLatency
| eval delta = currentLatency - previouslatency
| timechart span=30s first(delta) as Delta by pName

Any help is appreciated, thanks!
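If the goal is simply latency over time per partition (rather than the delta between consecutive readings), a simpler untested sketch, assuming latency is already an extracted numeric field, might be:

host=irmnrjeb0227d latency=*
| rex field=name "Partition\s(?<pName>\d\d\W\w+)"
| timechart span=30s avg(latency) by pName

This skips the streamstats/delta logic entirely and just charts the average latency per 30-second bucket, split by partition name.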
Hi, I have a field in my log called "MobileNumber", and I need to show the count of MobileNumber by location on a map.

e.g. 00121234567
Area code: 0012
Number: 1234567

If the area code belongs to Berlin (0151, 0157, or 0173), show the total count of area codes belonging to Berlin on the map. If the area code belongs to Wolfsburg (0361), show the total count of area codes belonging to Wolfsburg on the map.

FYI: Latitude and Longitude do not exist in the log file.

Any idea? Thanks
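One possible sketch: extract the area code, map it to a city with a lookup, and plot with geostats. The lookup used here (area_code_locations, with fields area_code, city, lat, lon) is hypothetical and would have to be created, since latitude/longitude are not in the log; the index name is also a placeholder:

index=your_index MobileNumber=*
| rex field=MobileNumber "^(?<area_code>\d{4})"
| lookup area_code_locations area_code OUTPUT city lat lon
| geostats latfield=lat longfield=lon count by city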
We're ingesting Tomcat logs and looking for items tagged [SEVERE]. I'd like to be able to pull a report of error rate and look for errors which are occurring at a significantly higher-than-average rate *for their error type*. In addition, we're getting data streams from multiple hosts, each of which is really its own instance and has its own "native" error rate.

I need the average rate of occurrence of errors over the last week and over the last day, grouped by host and error type. Then I need to flag any error whose rate has risen by, say, 500%. So far the best I've come up with is this:

index="tomcat" sourcetype="catalina.out_logs" SEVERE earliest=-7d latest=-1d
| rex field=_raw "\[SEVERE[\s\t]+\][\s\t]+(?<err>.+)[\n\r]"
| eval errhost = host + "::" + err
| bucket _time span=1h
| stats count as tcount by errhost
| stats avg(tcount) as wk_avg by errhost
| appendcols
    [search index="tomcat" sourcetype="catalina.out_logs" SEVERE earliest=-1d latest=now()
    | rex field=_raw "\[SEVERE[\s\t]+\][\s\t]+(?<err>.+)[\n\r]"
    | eval errhost = host + "::" + err
    | bucket _time span=1h
    | stats count as tcount by errhost
    | stats avg(tcount) as day_avg by errhost]
| eval perc= round(((day_avg - wk_avg) * 100 / wk_avg),2)
| fields + perc errhost
| search perc > 500.0
| search NOT "Unable to serve form"
| sort perc desc

So: I'm pulling SEVERE errors, extracting just the error text, concatenating that to the host to get my group-by string, bucketing in 1-hour increments to get an average, then building a chart with the 7-day average and the 1-day average for each host/error pair. Wondering if anyone else has a better way to do it? Thanks!
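One possible alternative (an untested sketch) computes both averages in a single pass over the last 7 days, keeps the hourly buckets in the by clause so the averages really are per-hour rates, and uses eval expressions inside stats instead of appendcols (appendcols can misalign rows when the two result sets don't contain the same errhost values):

index="tomcat" sourcetype="catalina.out_logs" SEVERE earliest=-7d latest=now()
| rex field=_raw "\[SEVERE[\s\t]+\][\s\t]+(?<err>.+)[\n\r]"
| bucket _time span=1h
| stats count as tcount by host err _time
| stats avg(eval(if(_time < relative_time(now(), "-1d"), tcount, null()))) as wk_avg
        avg(eval(if(_time >= relative_time(now(), "-1d"), tcount, null()))) as day_avg
        by host err
| eval perc = round((day_avg - wk_avg) * 100 / wk_avg, 2)
| where perc > 500
| sort - perc

The same NOT "Unable to serve form" exclusion from the original search could be added to the base search if needed.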
Hi, what is the rex for the MQ error code? Here is the log:

2021-10-08 06:52:12,785 ERROR TOPIC-00000 [Utility] MQJCA1011: Failed to allocate a JMS connection.

Desired fields:
error code=MQJCA1011
error description=Failed to allocate a JMS connection

Thanks,
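A possible rex sketch, assuming the error code is always an alphanumeric token that follows the closing bracket (as in "[Utility] MQJCA1011: ...") and is followed by a colon and the description:

... | rex field=_raw "\]\s+(?<error_code>[A-Z0-9]+):\s+(?<error_description>[^\r\n]+)"

Against the sample line this would yield error_code=MQJCA1011 and error_description=Failed to allocate a JMS connection. Adjust the anchor if the bracketed component name differs across events.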
Hi, I am running a search similar to this:

index=abc sourcetype=xyz "searchkeyword"

The results should contain only events that match searchkeyword, like:

<timestamp>| trace ......searchkeyword.......

but I am getting events from around that time along with the matched events:

<timestamp>| trace ......someotherresult.......
<timestamp>| trace ......searchkeyword.......
<timestamp>| trace ......someotherresult.......
<timestamp>| trace ......someotherresult.......
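One thing worth checking (a hedged diagnostic sketch, not a definitive answer): if the surrounding trace lines are being merged into the same multi-line event at index time, the whole block matches the keyword as a single event and the "extra" lines appear alongside the match. The default linecount field gives a quick indication:

index=abc sourcetype=xyz "searchkeyword"
| stats count by linecount

A linecount greater than 1 would suggest the event breaking (LINE_BREAKER / SHOULD_LINEMERGE in props.conf for the sourcetype) needs adjusting rather than the search itself.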
Workday Add-on for Splunk (https://splunkbase.splunk.com/app/3783/#/details, version 1.0.2), multi-tenant setup. I am trying to rename TA-workday to TA-workday44 (instead of TA-workday_tmp_1) as per the steps in https://community.workday.com/comment/860853#comment-860853 (to support multi-tenant collection), using the following steps. Files updated:

TA-workday_tmp_1\app.manifest (3 hits)
  Line 4: "title": "Workday Add-on for Splunk (tmp_1)",
  Line 7: "name": "TA-workday_tmp_1",
  Line 18: "description": "Workday Add-on for Splunk (tmp_1)",
TA-workday_tmp_1\appserver\static\js\build\globalConfig.json (4 hits)
  Line 3: "name": "TA-workday_tmp_1",
  Line 7: "restRoot": "TA_workday_tmp_1"
  Line 294: "name": "workday_tmp_1",
  Line 295: "title": "workday_tmp_1",
TA-workday_tmp_1\bin\ta_workday_tmp_1_declare.py (2 hits)
  Line 17: ta_name = 'TA-workday_tmp_1'
  Line 18: ta_lib_name = 'ta_workday_tmp_1'
TA-workday_tmp_1\bin\TA_workday_tmp_1_rh_settings.py (2 hits)
  Line 2: import ta_workday_tmp_1_declare
  Line 150: 'ta_workday_tmp_1_settings',
TA-workday_tmp_1\bin\TA_workday_tmp_1_rh_workday_tmp_1.py (2 hits)
  Line 2: import ta_workday_tmp_1_declare
  Line 63: 'workday_tmp_1',
TA-workday_tmp_1\bin\workday_tmp_1.py (6 hits)
  Line 1: import ta_workday_tmp_1_declare
  Line 14: import input_module_workday_tmp_1 as input_module
  Line 30: super(ModInputworkday, self).__init__("ta_workday_tmp_1", "workday_tmp_1", use_single_instance)
  Line 36: scheme.title = ("workday_tmp_1")
  Line 60: return "TA-workday_tmp_1"
TA-workday44/bin (added step): mv ta_workday to ta_workday44
TA-workday_tmp_1\default\app.conf (5 hits)
  Line 11: description = Workday Add-on for Splunk (tmp_1)
  Line 15: label = Workday Add-on for Splunk (tmp_1)
  Line 19: id = TA-workday_tmp_1
  Line 23: reload.ta_workday_tmp_1_account = simple
  Line 24: reload.ta_workday_tmp_1_settings = simple
TA-workday_tmp_1\default\collections.conf (1 hit)
  Line 1: [TA_workday_tmp_1_checkpointer]
TA-workday_tmp_1\default\data\ui\views\configuration.xml (1 hit)
  Line 2: <view template="TA-workday_tmp_1:templates/base.html" type="html" isDashboard="False">
TA-workday_tmp_1\default\data\ui\views\home.xml (2 hits)
  Line 2: <label>TA-Workday_tmp_1_Add-on_for_Splunk</label>
  Line 3: <description>This is an example view in add-on TA-Workday_tmp_1_Add-on_for_Splunk</description>
TA-workday_tmp_1\default\data\ui\views\inputs.xml (1 hit)
  Line 2: <view template="TA-workday_tmp_1:templates/base.html" type="html" isDashboard="False">
TA-workday_tmp_1\default\inputs.conf (1 hit)
  Line 1: [workday_tmp_1]
TA-workday_tmp_1\default\restmap.conf (10 hits)
  Line 2: [admin:TA_workday_tmp_1]
  Line 4: members = TA_workday_tmp_1_settings, TA_workday_tmp_1_workday_tmp_1
  Line 6: [admin_external:TA_workday_tmp_1_settings]
  Line 8: handlerfile = TA_workday_tmp_1_rh_settings.py
  Line 12: [admin_external:TA_workday_tmp_1_workday_tmp_1]
  Line 14: handlerfile = TA_workday_tmp_1_rh_workday_tmp_1.py
TA-workday_tmp_1\default\web.conf (8 hits)
  Line 2: [expose:TA_workday_tmp_1_settings]
  Line 3: pattern = TA_workday_tmp_1_settings
  Line 6: [expose:TA_workday_tmp_1_settings_specified]
  Line 7: pattern = TA_workday_tmp_1_settings/*
  Line 10: [expose:TA_workday_tmp_1_workday]
  Line 11: pattern = TA_workday_tmp_1_workday
  Line 14: [expose:TA_workday_tmp_1_workday_specified]
  Line 15: pattern = TA_workday_tmp_1_workday/*
TA-workday_tmp_1\README\inputs.conf.spec (1 hit)
  Line 1: [workday_tmp_1://<name>]

But I am still getting:

10-11-2021 14:29:03.878 +0000 ERROR AdminManagerExternal [6624 TcpChannelThread] -
Stack trace from python handler:
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-workday44/bin/ta_workday44/aob_py3/splunktaucclib/rest_handler/handler.py", line 117, in wrapper
    for name, data, acl in meth(self, *args, **kwargs):
  File "/opt/splunk/etc/apps/TA-workday44/bin/ta_workday44/aob_py3/splunktaucclib/rest_handler/handler.py", line 168, in get
    output_mode='json',
  File "/opt/splunk/etc/apps/TA-workday44/bin/ta_workday44/aob_py3/solnlib/packages/splunklib/binding.py", line 289, in wrapper
    return request_fun(self, *args, **kwargs)
  File "/opt/splunk/etc/apps/TA-workday44/bin/ta_workday44/aob_py3/solnlib/packages/splunklib/binding.py", line 71, in new_f
    val = f(*args, **kwargs)
  File "/opt/splunk/etc/apps/TA-workday44/bin/ta_workday44/aob_py3/solnlib/packages/splunklib/binding.py", line 679, in get
    response = self.http.get(path, all_headers, **query)
  File "/opt/splunk/etc/apps/TA-workday44/bin/ta_workday44/aob_py3/solnlib/packages/splunklib/binding.py", line 1183, in get
    return self.request(url, { 'method': "GET", 'headers': headers })
  File "/opt/splunk/etc/apps/TA-workday44

Any pointers? Thanks.
We have Cloud App Security (CAS) Alerts set to be pulled at an interval of every 5 minutes; this was the default when we set it up. What we're finding is that the same alert is being pulled every 5 minutes: the alert ID and URL are both identical, and so is all the content.

Here's the inputs.conf snippet:

[splunk_ta_o365_cloud_app_security://CAS_Alerts]
content_type = Alerts
index = o365
interval = 300

We're using the latest version of this app: https://splunkbase.splunk.com/app/4055/

From the log it does appear that the start time is incrementing, but something still isn't working as designed:

2021-10-11 11:57:29,339 level=INFO pid=30282 tid=MainThread logger=splunksdc.collector pos=collector.py:run:251 | | message="Modular input started."
2021-10-11 11:57:29,357 level=INFO pid=30282 tid=MainThread logger=splunk_ta_o365.common.settings pos=settings.py:load:36 | datainput=b'CAS_Alerts' start_time=1633971449 | message="Load proxy settings success." enabled=False host=b'' port=b'' username=b''
2021-10-11 11:57:29,386 level=INFO pid=30282 tid=MainThread logger=splunk_ta_o365.common.portal pos=portal.py:get_cas_token_by_psk:177 | datainput=b'CAS_Alerts' start_time=1633971449 | message="Acquire cloud application access token success."
2021-10-11 11:57:29,653 level=INFO pid=30282 tid=MainThread logger=splunk_ta_o365.modinputs.cloud_app_security pos=cloud_app_security.py:_create_consumer:139 | datainput=b'CAS_Alerts' start_time=1633971449 | message="Start retrieving Cloud Application Security messages." portal_url=b'<redacted>' portal_region=b'<redacted>'
2021-10-11 11:57:29,653 level=INFO pid=30282 tid=MainThread logger=splunk_ta_o365.modinputs.cloud_app_security pos=cloud_app_security.py:run:46 | datainput=b'CAS_Alerts' start_time=1633971449 | message="Start recording Cloud Application Security messages." source=b'https://<redacted>.portal.cloudappsecurity.com/api/v1/alerts/'
2021-10-11 11:57:29,653 level=INFO pid=30282 tid=MainThread logger=splunk_ta_o365.common.portal pos=portal.py:get:630 | datainput=b'CAS_Alerts' start_time=1633971449 | message="Calling Cloud Application Security API." url=b'https://<redacted>.portal.cloudappsecurity.com/api/v1/alerts/' params={'$filter': datetime.datetime(2021, 10, 11, 16, 57, 29, 653529)}
2021-10-11 11:57:50,766 level=INFO pid=30282 tid=MainThread logger=splunksdc.collector pos=collector.py:run:254 | | message="Modular input exited."
2021-10-11 12:02:26,129 level=INFO pid=31032 tid=MainThread logger=splunksdc.collector pos=collector.py:run:251 | | message="Modular input started."
2021-10-11 12:02:26,274 level=INFO pid=31032 tid=MainThread logger=splunk_ta_o365.common.settings pos=settings.py:load:36 | datainput=b'CAS_Alerts' start_time=1633971746 | message="Load proxy settings success." enabled=False host=b'' port=b'' username=b''
2021-10-11 12:02:26,303 level=INFO pid=31032 tid=MainThread logger=splunk_ta_o365.common.portal pos=portal.py:get_cas_token_by_psk:177 | datainput=b'CAS_Alerts' start_time=1633971746 | message="Acquire cloud application access token success."
2021-10-11 12:02:26,574 level=INFO pid=31032 tid=MainThread logger=splunk_ta_o365.modinputs.cloud_app_security pos=cloud_app_security.py:_create_consumer:139 | datainput=b'CAS_Alerts' start_time=1633971746 | message="Start retrieving Cloud Application Security messages." portal_url=b'<redacted>' portal_region=b'<redacted>'
2021-10-11 12:02:26,574 level=INFO pid=31032 tid=MainThread logger=splunk_ta_o365.modinputs.cloud_app_security pos=cloud_app_security.py:run:46 | datainput=b'CAS_Alerts' start_time=1633971746 | message="Start recording Cloud Application Security messages." source=b'https://<redacted>.portal.cloudappsecurity.com/api/v1/alerts/'
2021-10-11 12:02:26,574 level=INFO pid=31032 tid=MainThread logger=splunk_ta_o365.common.portal pos=portal.py:get:630 | datainput=b'CAS_Alerts' start_time=1633971746 | message="Calling Cloud Application Security API." url=b'https://<redacted>.portal.cloudappsecurity.com/api/v1/alerts/' params={'$filter': datetime.datetime(2021, 10, 11, 17, 2, 26, 574478)}
2021-10-11 12:02:48,135 level=INFO pid=31032 tid=MainThread logger=splunksdc.collector pos=collector.py:run:254 | | message="Modular input exited."
How do I set up a data model based on Authentication and its child datasets (Default, Insecure, and Privileged Authentication)? It uses an action of success or failure. I am using the following Splunk query:

(`cim_Authentication_indexes`) tag=authentication NOT (action=success user=*$) NOT "pam_unix(sshd:auth): authentication failure;"

Is there anything wrong with the above query, or how can I make it work?
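If the CIM Authentication data model is already accelerated, one hedged sketch for querying it directly (rather than searching the raw indexes) would be:

| tstats summariesonly=true count from datamodel=Authentication where Authentication.action=failure by Authentication.user, Authentication.src, Authentication.action

The constraint logic (the NOT clauses above) would normally live in the root dataset or a child dataset definition of the data model, not in the ad-hoc search, so the child datasets such as Default, Insecure, and Privileged Authentication inherit it.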
I work in a very large environment with Splunk Enterprise + ES. Use cases are forgotten and no one has time to deal with them, so I am trying to create a plan to use them to improve our environment. Please advise. Thanks a million.
One of our employees left the company, and now all reports and alerts owned by him are not visible in Splunk. We have Splunk 7.3.3 in our environment. How can I find those alerts and reports? They are very important to us, as we modify them periodically.

Regards,
Nilesh
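A hedged sketch for locating the orphaned objects via the saved-searches REST endpoint (replace departed_user with the former employee's username); it lists the reports and alerts he owned along with the app they live in, after which ownership can typically be reassigned by an admin:

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search eai:acl.owner="departed_user"
| table title eai:acl.app eai:acl.owner eai:acl.sharing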
Hello, I have struggled with alerting on a specific search I've made:

EVENT_TYPE="Login" LOGIN_STATUS=*
    [search EVENT_TYPE="Login" LOGIN_STATUS="LOGIN_ERROR_PASSWORD_LOCKOUT"
    | stats count by USER_ID
    | table USER_ID]
| stats latest(LOGIN_STATUS) AS LOGIN_STATUS latest(USER_NAME) AS USER_NAME latest(UserAccountId) AS "Account Id" latest(USER_TYPE) AS "User Type" latest(TIMESTAMP) AS "Time stamp" by USER_ID
| where LOGIN_STATUS="LOGIN_ERROR_PASSWORD_LOCKOUT"

This returns results (screenshot not included here). I have tried the trigger condition "number of results > 0", a custom condition of search USER_ID > 0, and field tokens such as $RESULT.userid$ > 0. Does anyone know how I can edit my search or trigger so that it actually triggers when I receive results?
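If the intent is simply to alert whenever any user hit a lockout within the alert's time range, a simpler untested sketch that avoids the subsearch and pairs naturally with the built-in "Number of Results is greater than 0" trigger condition might be:

EVENT_TYPE="Login" LOGIN_STATUS="LOGIN_ERROR_PASSWORD_LOCKOUT"
| stats latest(USER_NAME) AS USER_NAME latest(UserAccountId) AS "Account Id" latest(USER_TYPE) AS "User Type" latest(TIMESTAMP) AS "Time stamp" by USER_ID

It is also worth double-checking the alert's own scheduled time range: the scheduled run only sees events within the range it is configured to search, which may differ from the range used when the ad-hoc search returned results.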
I am on the cluster master, in the Bucket Status tab, and I see a number of "fixup tasks pending". For every one of them the status is "Cannot fix search count as the bucket hasn't rolled yet". I tried selecting the action and clicking Roll, but it is not working.

Any suggestions? Does restarting the cluster master help? I see that the SF and RF are met and the data is searchable, but these bucket fixup tasks are still present and the number keeps increasing.
Hello. If our Splunk Enterprise license expired and we disabled our ingest so that no license violations were received, would historical Splunk data still be fully searchable, WITHOUT applying a free license or another Enterprise license?

Thanks,
Hello, regarding https://docs.splunk.com/Documentation/Splunk/8.2.2/ReleaseNotes/FixDatetimexml2020 - if we upgrade the Splunk version, does /etc/datetime.xml still have lower priority than the patch app pushed to the SHC/IDXC/UFs? Looking at https://docs.splunk.com/Documentation/Splunk/8.2.2/Admin/Wheretofindtheconfigurationfiles, I understand that /etc may be the lowest priority. How do you handle this if you keep v7 universal forwarders and upgrade the servers to 8.2.2? Thanks.
Hi, what is the rex for the error number and the error message (separately) in this log?

23:43:51.411 app module: 100:  Not Found!

Desired fields:
error number=100
error message=Not Found!

Thanks
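A possible sketch, assuming the literal text "module:" always precedes the error number in these events:

... | rex field=_raw "module:\s+(?<error_number>\d+):\s+(?<error_message>.+)"

Against the sample line this would yield error_number=100 and error_message=Not Found!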
Hi, I am new to Dashboard Studio. How do I create a drilldown from a bar chart to a single-value chart dynamically in Dashboard Studio? The bar chart has the sourcetypes (Pass, Fail, Error) on the y-axis and their counts on the x-axis. The single-value chart has to show the individual count of Pass, Fail, or Error depending on the selection made in the bar chart.
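A hedged sketch of the single-value panel's search, assuming the bar chart's drilldown is configured (via a drilldown.setToken event handler in the dashboard definition) to write the clicked value into a token named selected_status; both the index name and the token name here are placeholders:

index=your_index sourcetype=$selected_status$
| stats count

The key idea is that the bar chart sets a token from the clicked category and the single-value panel's search references that token, so it re-runs whenever the selection changes.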
Hi all, I have indexer clusters at multiple sites but common search heads. I want to remove certain indexes which hold no data. How can I find in which indexer cluster's indexes.conf the entries for those indexes are present, and in which deployment app on which indexer the related monitor stanza is present? Is there a REST API command to get these details?

Thanks in advance.
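A hedged sketch using the REST endpoint for index definitions, run from a search head that can reach all the indexers (index1 and index2 are placeholders for the indexes in question); eai:acl.app shows which app's indexes.conf defines each index, and splunk_server shows which indexer reports it:

| rest /services/data/indexes splunk_server=*
| search title IN ("index1", "index2")
| table splunk_server title eai:acl.app homePath totalEventCount

A similar query against /services/data/inputs/monitor could help locate related monitor stanzas, though those typically live on the forwarders rather than on the indexers.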
Is there a query to see which application used which integration method (HEC, TCP, DB Connect, etc.) to onboard its data to Splunk Cloud?
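There is no single built-in field that records the onboarding method, but as a rough starting point, here is a hedged sketch that infers the method from common naming conventions (it assumes HEC inputs keep the default "http:" source prefix and DB Connect inputs keep a "db_connect"/"dbx"-style source name; adjust the patterns to your environment):

| tstats count where index=* earliest=-24h by index, source, sourcetype
| eval ingest_method=case(match(source, "^http:"), "HEC", match(source, "(?i)db_connect|dbx"), "DB Connect", true(), "other (TCP/UDP/file/forwarder)")
| stats sum(count) as events by index, ingest_method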