Hi, what is the rex for the MQ error code? Here is the log:

2021-10-08 06:52:12,785 ERROR TOPIC-00000 [Utility] MQJCA1011: Failed to allocate a JMS connection.   error code=MQJCA1011 error description=Failed to allocate a JMS connection

Thanks,
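A minimal sketch of an extraction, assuming the two key=value pairs always appear in this order; the field names error_code and error_description are just illustrative:

... | rex "error code=(?<error_code>\S+)\s+error description=(?<error_description>.+)"

Against the sample line above this would yield error_code=MQJCA1011 and error_description=Failed to allocate a JMS connection.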
Hi, I am making a search similar to this:

index=abc sourcetype=xyz "searchkeyword"

The search result should contain only events that match searchkeyword, like:

<timestamp>| trace ......searchkeyword.......

but I am getting events from around that time along with the matched events:

<timestamp>| trace ......someotherresult.......
<timestamp>| trace ......searchkeyword.......
<timestamp>| trace ......someotherresult.......
<timestamp>| trace ......someotherresult.......
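To confirm which events actually contain the string (versus neighbors that were merged into the same event at index time, a common cause of this symptom), one sketch is to force a raw-text match:

index=abc sourcetype=xyz "searchkeyword" | regex _raw="searchkeyword"

If events still come back without the keyword visible, checking the event boundaries (line breaking / line merging settings for the sourcetype) would be the next step, since a multi-line event only needs to contain the keyword once for the whole event to match.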
I am using the Workday Add-on for Splunk (https://splunkbase.splunk.com/app/3783/#/details, version 1.0.2) and want it to support a multi-tenant setting. I am trying to rename TA-workday to TA-workday44 (instead of TA-workday_tmp_1), as per the steps in https://community.workday.com/comment/860853#comment-860853 (to support multi-tenant collection), using the following steps.

Files to be updated:

TA-workday_tmp_1\app.manifest (3 hits)
Line 4: "title": "Workday Add-on for Splunk (tmp_1)",
Line 7: "name": "TA-workday_tmp_1",
Line 18: "description": "Workday Add-on for Splunk (tmp_1)",

TA-workday_tmp_1\appserver\static\js\build\globalConfig.json (4 hits)
Line 3: "name": "TA-workday_tmp_1",
Line 7: "restRoot": "TA_workday_tmp_1"
Line 294: "name": "workday_tmp_1",
Line 295: "title": "workday_tmp_1",

TA-workday_tmp_1\bin\ta_workday_tmp_1_declare.py (2 hits)
Line 17: ta_name = 'TA-workday_tmp_1'
Line 18: ta_lib_name = 'ta_workday_tmp_1'

TA-workday_tmp_1\bin\TA_workday_tmp_1_rh_settings.py (2 hits)
Line 2: import ta_workday_tmp_1_declare
Line 150: 'ta_workday_tmp_1_settings',

TA-workday_tmp_1\bin\TA_workday_tmp_1_rh_workday_tmp_1.py (2 hits)
Line 2: import ta_workday_tmp_1_declare
Line 63: 'workday_tmp_1',

TA-workday_tmp_1\bin\workday_tmp_1.py (6 hits)
Line 1: import ta_workday_tmp_1_declare
Line 14: import input_module_workday_tmp_1 as input_module
Line 30: super(ModInputworkday, self).__init__("ta_workday_tmp_1", "workday_tmp_1", use_single_instance)
Line 36: scheme.title = ("workday_tmp_1")
Line 60: return "TA-workday_tmp_1"

TA-workday44/bin (added this step)
mv ta_workday to ta_workday44

TA-workday_tmp_1\default\app.conf (5 hits)
Line 11: description = Workday Add-on for Splunk (tmp_1)
Line 15: label = Workday Add-on for Splunk (tmp_1)
Line 19: id = TA-workday_tmp_1
Line 23: reload.ta_workday_tmp_1_account = simple
Line 24: reload.ta_workday_tmp_1_settings = simple

TA-workday_tmp_1\default\collections.conf (1 hit)
Line 1: [TA_workday_tmp_1_checkpointer]

TA-workday_tmp_1\default\data\ui\views\configuration.xml (1 hit)
Line 2: <view template="TA-workday_tmp_1:templates/base.html" type="html" isDashboard="False">

TA-workday_tmp_1\default\data\ui\views\home.xml (2 hits)
Line 2: <label>TA-Workday_tmp_1_Add-on_for_Splunk</label>
Line 3: <description>This is an example view in add-on TA-Workday_tmp_1_Add-on_for_Splunk</description>

TA-workday_tmp_1\default\data\ui\views\inputs.xml (1 hit)
Line 2: <view template="TA-workday_tmp_1:templates/base.html" type="html" isDashboard="False">

TA-workday_tmp_1\default\inputs.conf (1 hit)
Line 1: [workday_tmp_1]

TA-workday_tmp_1\default\restmap.conf (10 hits)
Line 2: [admin:TA_workday_tmp_1]
Line 4: members = TA_workday_tmp_1_settings, TA_workday_tmp_1_workday_tmp_1
Line 6: [admin_external:TA_workday_tmp_1_settings]
Line 8: handlerfile = TA_workday_tmp_1_rh_settings.py
Line 12: [admin_external:TA_workday_tmp_1_workday_tmp_1]
Line 14: handlerfile = TA_workday_tmp_1_rh_workday_tmp_1.py

TA-workday_tmp_1\default\web.conf (8 hits)
Line 2: [expose:TA_workday_tmp_1_settings]
Line 3: pattern = TA_workday_tmp_1_settings
Line 6: [expose:TA_workday_tmp_1_settings_specified]
Line 7: pattern = TA_workday_tmp_1_settings/*
Line 10: [expose:TA_workday_tmp_1_workday]
Line 11: pattern = TA_workday_tmp_1_workday
Line 14: [expose:TA_workday_tmp_1_workday_specified]
Line 15: pattern = TA_workday_tmp_1_workday/*

TA-workday_tmp_1\README\inputs.conf.spec (1 hit)
Line 1: [workday_tmp_1://<name>]

But I am still getting:

10-11-2021 14:29:03.878 +0000 ERROR AdminManagerExternal [6624 TcpChannelThread] -
Stack trace from python handler:
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-workday44/bin/ta_workday44/aob_py3/splunktaucclib/rest_handler/handler.py", line 117, in wrapper
    for name, data, acl in meth(self, *args, **kwargs):
  File "/opt/splunk/etc/apps/TA-workday44/bin/ta_workday44/aob_py3/splunktaucclib/rest_handler/handler.py", line 168, in get
    output_mode='json',
  File "/opt/splunk/etc/apps/TA-workday44/bin/ta_workday44/aob_py3/solnlib/packages/splunklib/binding.py", line 289, in wrapper
    return request_fun(self, *args, **kwargs)
  File "/opt/splunk/etc/apps/TA-workday44/bin/ta_workday44/aob_py3/solnlib/packages/splunklib/binding.py", line 71, in new_f
    val = f(*args, **kwargs)
  File "/opt/splunk/etc/apps/TA-workday44/bin/ta_workday44/aob_py3/solnlib/packages/splunklib/binding.py", line 679, in get
    response = self.http.get(path, all_headers, **query)
  File "/opt/splunk/etc/apps/TA-workday44/bin/ta_workday44/aob_py3/solnlib/packages/splunklib/binding.py", line 1183, in get
    return self.request(url, { 'method': "GET", 'headers': headers })
  File "/opt/splunk/etc/apps/TA-workday44/bin/ta_workday44

Any pointers? Thanks.
We have Cloud App Security (CAS) Alerts set to be pulled at an interval of every 5 minutes, which was the default when we set it up. What we're finding is that the same alert is being pulled every 5 minutes. The alert ID and URL are both identical, and so is all the content. Here's the inputs.conf snippet:

[splunk_ta_o365_cloud_app_security://CAS_Alerts]
content_type = Alerts
index = o365
interval = 300

We're using the latest version of this app: https://splunkbase.splunk.com/app/4055/

From the log it does appear that the start time is incrementing, but something still isn't working as designed:

2021-10-11 11:57:29,339 level=INFO pid=30282 tid=MainThread logger=splunksdc.collector pos=collector.py:run:251 | | message="Modular input started."
2021-10-11 11:57:29,357 level=INFO pid=30282 tid=MainThread logger=splunk_ta_o365.common.settings pos=settings.py:load:36 | datainput=b'CAS_Alerts' start_time=1633971449 | message="Load proxy settings success." enabled=False host=b'' port=b'' username=b''
2021-10-11 11:57:29,386 level=INFO pid=30282 tid=MainThread logger=splunk_ta_o365.common.portal pos=portal.py:get_cas_token_by_psk:177 | datainput=b'CAS_Alerts' start_time=1633971449 | message="Acquire cloud application access token success."
2021-10-11 11:57:29,653 level=INFO pid=30282 tid=MainThread logger=splunk_ta_o365.modinputs.cloud_app_security pos=cloud_app_security.py:_create_consumer:139 | datainput=b'CAS_Alerts' start_time=1633971449 | message="Start retrieving Cloud Application Security messages." portal_url=b'<redacted>' portal_region=b'<redacted>'
2021-10-11 11:57:29,653 level=INFO pid=30282 tid=MainThread logger=splunk_ta_o365.modinputs.cloud_app_security pos=cloud_app_security.py:run:46 | datainput=b'CAS_Alerts' start_time=1633971449 | message="Start recording Cloud Application Security messages." source=b'https://<redacted>.portal.cloudappsecurity.com/api/v1/alerts/'
2021-10-11 11:57:29,653 level=INFO pid=30282 tid=MainThread logger=splunk_ta_o365.common.portal pos=portal.py:get:630 | datainput=b'CAS_Alerts' start_time=1633971449 | message="Calling Cloud Application Security API." url=b'https://<redacted>.portal.cloudappsecurity.com/api/v1/alerts/' params={'$filter': datetime.datetime(2021, 10, 11, 16, 57, 29, 653529)}
2021-10-11 11:57:50,766 level=INFO pid=30282 tid=MainThread logger=splunksdc.collector pos=collector.py:run:254 | | message="Modular input exited."
2021-10-11 12:02:26,129 level=INFO pid=31032 tid=MainThread logger=splunksdc.collector pos=collector.py:run:251 | | message="Modular input started."
2021-10-11 12:02:26,274 level=INFO pid=31032 tid=MainThread logger=splunk_ta_o365.common.settings pos=settings.py:load:36 | datainput=b'CAS_Alerts' start_time=1633971746 | message="Load proxy settings success." enabled=False host=b'' port=b'' username=b''
2021-10-11 12:02:26,303 level=INFO pid=31032 tid=MainThread logger=splunk_ta_o365.common.portal pos=portal.py:get_cas_token_by_psk:177 | datainput=b'CAS_Alerts' start_time=1633971746 | message="Acquire cloud application access token success."
2021-10-11 12:02:26,574 level=INFO pid=31032 tid=MainThread logger=splunk_ta_o365.modinputs.cloud_app_security pos=cloud_app_security.py:_create_consumer:139 | datainput=b'CAS_Alerts' start_time=1633971746 | message="Start retrieving Cloud Application Security messages." portal_url=b'<redacted>' portal_region=b'<redacted>'
2021-10-11 12:02:26,574 level=INFO pid=31032 tid=MainThread logger=splunk_ta_o365.modinputs.cloud_app_security pos=cloud_app_security.py:run:46 | datainput=b'CAS_Alerts' start_time=1633971746 | message="Start recording Cloud Application Security messages." source=b'https://<redacted>.portal.cloudappsecurity.com/api/v1/alerts/'
2021-10-11 12:02:26,574 level=INFO pid=31032 tid=MainThread logger=splunk_ta_o365.common.portal pos=portal.py:get:630 | datainput=b'CAS_Alerts' start_time=1633971746 | message="Calling Cloud Application Security API." url=b'https://<redacted>.portal.cloudappsecurity.com/api/v1/alerts/' params={'$filter': datetime.datetime(2021, 10, 11, 17, 2, 26, 574478)}
2021-10-11 12:02:48,135 level=INFO pid=31032 tid=MainThread logger=splunksdc.collector pos=collector.py:run:254 | | message="Modular input exited."
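While troubleshooting the input itself, a possible search-time workaround is to deduplicate on the alert identifier; this is only a sketch, and the field name alert_id is an assumption (use whatever field actually carries the CAS alert ID in your events):

index=o365 | dedup alert_id

This hides the duplicates at search time but does not fix the checkpointing on the input side.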
How do I set up a search against the Authentication data model and its child datasets (Default, Insecure, and Privileged Authentication)? It uses an action of success or failure. I am using the following Splunk query:

(`cim_Authentication_indexes`) tag=authentication NOT (action=success user=*$) NOT "pam_unix(sshd:auth): authentication failure;"

Is there anything wrong with the above query, or how can I make it work?
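As a cross-check, a minimal sketch that queries the Authentication data model directly with tstats; the Authentication.action and Authentication.user field names come from the CIM:

| tstats summariesonly=false count from datamodel=Authentication by Authentication.action Authentication.user

If this returns nothing, the events are not being tagged and mapped into the data model at all, which would need fixing before the NOT filters in the raw search matter.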
I work in a very large environment with Splunk Enterprise + ES. Use cases are forgotten and no one has time to deal with them, so I am trying to create a plan to use them to better our environment. Please advise. Thanks a million.
One of our employees left the company, and now all the reports and alerts owned by him are not visible in Splunk. We have Splunk 7.3.3 in our environment. How can I find those alerts/reports? They are very important to us, as we modify them periodically. Regards, Nilesh
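A sketch for locating the orphaned objects via REST, with departed_user as a placeholder for the actual account name:

| rest /servicesNS/-/-/saved/searches count=0
| search eai:acl.owner="departed_user"
| table title eai:acl.app eai:acl.owner eai:acl.sharing

Once located, the objects can be reassigned to an active owner (on 7.3 this should be available under Settings > All configurations > Reassign Knowledge Objects, if I recall the menu path correctly).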
Hello, I have struggled with alerting on a specific search I've made:

EVENT_TYPE="Login" LOGIN_STATUS=*
    [ search EVENT_TYPE="Login" LOGIN_STATUS="LOGIN_ERROR_PASSWORD_LOCKOUT"
      | stats count by USER_ID
      | table USER_ID ]
| stats latest(LOGIN_STATUS) AS LOGIN_STATUS latest(USER_NAME) AS USER_NAME latest(UserAccountId) AS "Account Id" latest(USER_TYPE) AS "User Type" latest(TIMESTAMP) AS "Time stamp" by USER_ID
| where LOGIN_STATUS="LOGIN_ERROR_PASSWORD_LOCKOUT"

which produces results (screenshot omitted). For the trigger condition I have tried "number of results > 0", I have tried a custom condition "search USER_ID > 0", and I tried using field tokens such as $RESULT.userid$ > 0. Does anyone know how I can edit my search or trigger condition so the alert actually triggers when I receive results?
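A sketch of the usual setup, keeping the field names from the search above: leave the search as it is, save the alert with the built-in trigger condition "Number of Results" is greater than 0, and reference result fields in alert actions with the exact field case, e.g.:

$result.USER_ID$

Result tokens use lowercase "result" and are case-sensitive on the field name, so $RESULT.userid$ would never resolve for a field named USER_ID. If the alert still never fires, comparing its time range against the cron schedule is a common next check, since a scheduled run can look at a different window than the one used when testing manually.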
I am on the cluster master, in the bucket status tab. I see a number of "fixup tasks pending", and for every one of them the status is "Cannot fix search count as the bucket hasn't rolled yet". I tried selecting the action and clicking roll, but it is not working. Any suggestions? Does restarting the cluster master help? I see the SF and RF are met and the data is searchable, but these bucket fixup tasks are still present and the number keeps increasing.
Hello. If our Splunk Enterprise license expired and we disabled our ingest, so that no license violations were received, would Splunk historical data still be fully searchable, WITHOUT applying a free license or applying another Enterprise license? Thanks,
Hello, regarding https://docs.splunk.com/Documentation/Splunk/8.2.2/ReleaseNotes/FixDatetimexml2020 - if we upgrade the Splunk version, does /etc/datetime.xml still have lower priority than the pushed patch app on the SHC/IDXC/UFs? Looking at https://docs.splunk.com/Documentation/Splunk/8.2.2/Admin/Wheretofindtheconfigurationfiles, am I right to understand that /etc is the lowest priority? How do you handle it if you keep v7 universal forwarders and upgrade the servers to 8.2.2? Thanks.
Hi, what is the rex for the error number and the error message of this log (extracted separately)?

23:43:51.411 app module: 100:  Not Found!

Desired fields:
error number = 100
error message = Not Found!

Thanks
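A minimal sketch, assuming the error number is always the digits after "app module:" and the field names error_number and error_message are illustrative:

... | rex "app module:\s+(?<error_number>\d+):\s+(?<error_message>.+)"

Against the sample line this yields error_number=100 and error_message=Not Found!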
Hi, I am new to Dashboard Studio. How do I create a drilldown from a bar chart to a single value chart dynamically in Dashboard Studio? The bar chart has sourcetypes (Pass, fail, error) on the y-axis and count on the x-axis. The single value chart has to show the independent count of Pass, fail, or error upon selection in the bar chart.
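A sketch of the search side, assuming the bar chart sets a token named selected_type on click (in Dashboard Studio this is configured through the chart's interactions, via a drilldown that sets a token from the clicked value) and that the Pass/fail/error values live in a field called status; your_index, status, and selected_type are all placeholders:

Bar chart search:    index=your_index | stats count by status
Single value search: index=your_index status="$selected_type$" | stats count

The single value panel re-runs whenever the token changes, so clicking a different bar updates the count.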
Hi all, I have indexer clusters at multiple sites but common search heads. I want to remove certain indexes which hold no data. How can I find in which indexer cluster's indexes.conf the entries for those indexes are present? Also, in which deployment app of which indexer is the related monitoring stanza present? Is there any REST API command to get the details? Thanks in advance.
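A sketch using the indexes REST endpoint, which by default runs against all search peers from a search head; totalEventCount=0 is used here as a rough proxy for "holds no data":

| rest /services/data/indexes count=0
| search totalEventCount=0
| table splunk_server title homePath

The splunk_server column shows which peers define the index, which identifies the cluster whose indexes.conf carries the entry. On a peer of that cluster, splunk btool indexes list <indexname> --debug prints the exact file the stanza comes from, and the same btool technique against inputs locates the monitoring stanza and its deployment app.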
Is there a query to see which application has followed which integration method to onboard its data to Splunk Cloud (e.g. HEC, TCP, DB Connect)?
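There is no single out-of-the-box report for this that I know of, but the internal metrics log can approximate it. A sketch, where the group names are real metrics.log groups but the grouping fields may need adjusting for your data:

index=_internal source=*metrics.log group=per_sourcetype_thruput | stats sum(kb) AS total_kb by series
index=_internal source=*metrics.log group=tcpin_connections | stats sum(kb) AS total_kb by connectionType sourceIp

The first shows volume per sourcetype, which usually maps back to an application; the second covers TCP/forwarder traffic. HEC traffic appears under group=http_event_collector, and DB Connect inputs are normally recognizable by their dedicated sourcetypes.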
I have a lookup table with a few fields (FirstSeenDate, LastSeenDate, IP, etc.). I have a search created to show me the top 10 events in the table by count. What I want to do is add a part to the search that filters out anything older than 90 days in the FirstSeenDate column.
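A sketch, assuming the lookup file is named my_lookup.csv and FirstSeenDate is a string like 2021-10-08 (adjust the strptime format string and the top clause to match your actual data):

| inputlookup my_lookup.csv
| where strptime(FirstSeenDate, "%Y-%m-%d") >= relative_time(now(), "-90d@d")
| top limit=10 IP

strptime converts the date string to epoch time so it can be compared against the 90-day boundary that relative_time computes from now().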
Using fetch in ReactJS:

fetch('https://[SUBDOMAIN].splunkcloud.com:8088/services/collector/event/1.0', {
  method: 'POST',
  headers: {
    'Access-Control-Allow-Origin': 'application/json',
    'Access-Control-Allow-Credentials': 'true',
    'Access-Control-Allow-Methods': 'OPTIONS, GET, POST',
    'Access-Control-Allow-Headers': 'Content-Type, Depth, User-Agent, X-File-Size, X-Requested-With, If-Modified-Since, X-File-Name, Cache-Control',
    Accept: 'application/json',
    'Content-Type': 'application/json',
    Authorization: 'Splunk [TOKEN]'
  },
  body: JSON.stringify({
    sourcetype: '_json',
    index: 'main',
    host: 'mydata2',
    event: { foo: 'bar3', b: ['value1_3', 'value1_4'] }
  })
})
  .then((res) => res.json())
  .then((json) => {
    // eslint-disable-next-line no-console
    console.log('json---->', json);
  });

Using splunk-logging in ReactJS:

import * as SplunkLogging from 'splunk-logging';

const SplunkLogger = SplunkLogging.Logger;
const config = {
  token: [TOKEN],
  url: '[URL]:8088'
};
// eslint-disable-next-line no-var
const Logger = new SplunkLogger(config);
Logger.requestOptions.strictSSL = true;
// eslint-disable-next-line no-var
const payload = {
  // Message can be anything; doesn't have to be an object
  message: {
    temperature: '70F',
    chickenCount: 500
  }
};
console.info('Sending payload', payload);
Logger.send(payload, function (err, resp, body) {
  // If successful, body will be { text: 'Success', code: 0 }
  console.info('Response from Splunk', body);
});
Hi, I need to call JS from an <a> anchor tag. Can someone please help?

<div id="showMoreRecords" align="right">
  <a>Show More Records</a>
</div>

JS file code:

require([
  'underscore',
  'backbone',
  '../app/monitorPro/components/ModalView',
  'splunkjs/mvc',
  'splunkjs/mvc/searchmanager',
  'splunkjs/mvc/simplexml/ready!'
], function(_, Backbone, ModalView, mvc, SearchManager) {
  var order = mvc.Components.get("showMoreRecords"); // getting error here
  var tokens = mvc.Components.getInstance("submitted");
  var poNumber = tokens.get("PONumber_ord");
  var detailSearch = new SearchManager({
    id: "detailSearch",
    earliest_time: "0",
    latest_time: "",
    preview: true,
    cache: false,
    search: 'index=testIndex source="testsource" PONumber=$PONumber_ord$ | table PONumber,CustomerName,RequestDate,shipperTimestamp'
  }, {tokens: true, tokenNamespace: "submitted"});
  order.on("click", function(e) {
    e.preventDefault();
    var modal = new ModalView({
      title: poNumber,
      search: detailSearch
    });
    modal.show();
  });
});
Hello Splunkers, I am using this add-on (https://splunkbase.splunk.com/app/5037/) to create a ticket in Jira as an alert action. But after setting it up with the Jira URL and credentials, it shows an error in this query:

index=_internal sourcetype=splunkd component=sendmodalert
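A sketch that narrows the same internal search down to this alert action's own log lines; the action name jira_service_desk is an assumption based on the app and may differ in your install (run the broad search first and check the action field):

index=_internal sourcetype=splunkd component=sendmodalert action="jira_service_desk"
| table _time log_level event_message

The log_level and event_message fields usually carry the actual error text returned by the modular alert script.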
Hi All, our Splunk setup previously had no test environment, but now one is present, so I need to replicate all the alerts, dashboards, and reports from production to the test environment. I don't have access to the Splunk backend. Is there a smarter way to do this than cloning each and every alert? Waiting for a response. Thanks in advance.
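Without backend access, one approach is the REST API; a sketch that lists the production saved searches (reports and alerts) so they can be exported, where your_app is a placeholder:

| rest /servicesNS/-/-/saved/searches count=0
| search eai:acl.app="your_app"
| table title search cron_schedule actions eai:acl.app eai:acl.owner

Dashboards can be listed the same way from /servicesNS/-/-/data/ui/views. The table can be exported as CSV and the objects re-created via POSTs to the test search head's REST API, or handed to whoever does have configuration access.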