All Topics

Hi, I'm trying to configure TA-symantec_atp v1.5.0 on Splunk 8.1.6, but nothing happens when I try to save the config on the UI page. I found the errors below in /opt/splunk/var/log/splunk/python.log:

2021-09-22 13:18:41,150 +0200 ERROR __init__:164 - The REST handler module "email_symantec_util" could not be found. Python files must be in $SPLUNK_HOME/etc/apps/$MY_APP/bin/
2021-09-22 13:18:41,150 ERROR The REST handler module "email_symantec_util" could not be found. Python files must be in $SPLUNK_HOME/etc/apps/$MY_APP/bin/
2021-09-22 13:18:41,151 +0200 ERROR __init__:165 - No module named 'rapid_diag'
Traceback (most recent call last):
  File "/opt/splunk/lib/python3.7/site-packages/splunk/rest/__init__.py", line 161, in dispatch
    module = __import__('splunk.rest.external.%s' % parts[0], None, None, parts[0])
  File "/opt/splunk/etc/apps/TA-symantec_atp/bin/email_symantec_util.py", line 6, in <module>
    from . import logger_manager
  File "/opt/splunk/etc/apps/splunk_rapid_diag/bin/logger_manager.py", line 14, in <module>
    from rapid_diag.util import get_splunkhome_path, get_app_conf
ModuleNotFoundError: No module named 'rapid_diag'

And in /opt/splunk/var/log/splunk/web_service.log:

2021-09-22 13:24:03,700 ERROR [614b1253af7ff740791c10] utility:58 - name=javascript, class=Splunk.Error, lineNumber=272, message=Uncaught TypeError: Cannot read properties of undefined (reading 'data'), fileName=https://localhost:8443/en-US/static/@071D8440E5D1A785ECFF180D1ECF4589ACA117B332BB46A44AF934EFD3BCE244.13:1/app/TA-symantec_atp/application.js
2021-09-22 13:24:05,706 ERROR [614b1255af7ff75b72bcd0] utility:58 - name=javascript, class=Splunk.Error, lineNumber=272, message=Uncaught TypeError: Cannot read properties of undefined (reading 'data'), fileName=https://localhost:8443/en-US/static/@071D8440E5D1A785ECFF180D1ECF4589ACA117B332BB46A44AF934EFD3BCE244.13:1/app/TA-symantec_atp/application.js
2021-09-22 13:24:07,698 ERROR [614b1257ad7ff740411e50] utility:58 - name=javascript, class=Splunk.Error, lineNumber=272, message=Uncaught TypeError: Cannot read properties of undefined (reading 'data'), fileName=https://localhost:8443/en-US/static/@071D8440E5D1A785ECFF180D1ECF4589ACA117B332BB46A44AF934EFD3BCE244.13:1/app/TA-symantec_atp/application.js
2021-09-22 13:24:09,702 ERROR [614b1259ae7ff740791790] utility:58 - name=javascript, class=Splunk.Error, lineNumber=272, message=Uncaught TypeError: Cannot read properties of undefined (reading 'data'), fileName=https://localhost:8443/en-US/static/@071D8440E5D1A785ECFF180D1ECF4589ACA117B332BB46A44AF934EFD3BCE244.13:1/app/TA-symantec_atp/application.js

Background: I'm currently using TA-symantec_atp v1.3.0 with Splunk 7.3.2, but I want to upgrade to Splunk 8.1.x, and only TA-symantec_atp v1.5.0 is compatible with 8.1.x and above (Python 3). I've tried to install and configure v1.5.0 of the add-on on several machines running Splunk 8.1.x, but all attempts resulted in the same error described above. Has anybody gotten this TA working?
The default page needs to be changed: after logging in to Splunk, I should be directed to the Triggered Alerts page. E.g., below are the triggered alerts. After login I should be able to see each alert that was triggered, with its time, severity and other common information, and after analyzing it I should be able to close the triggered alert. Can anyone please guide me on this?
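A minimal sketch of how the post-login landing page is usually set, assuming the triggered-alerts view lives in an app of your choosing (the app and view names below are placeholders):

# $SPLUNK_HOME/etc/apps/user-prefs/local/user-prefs.conf -- default app for all users
[general_default]
default_namespace = my_alerts_app

# default/data/ui/nav/default.xml inside that app -- default view within the app
<nav>
  <view name="my_triggered_alerts_view" default="true" />
</nav>

Closing a triggered alert from a dashboard would still need a dedicated app (for example Alert Manager on Splunkbase) or a custom workflow; the stock Triggered Alerts page under Activity only lists and deletes alert records.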
I'm trying to plan a (for me) large deployment, connecting several sites to a headquarters. Each site has from 200 to 900 clients that need to send data to the indexers. At two locations the bandwidth is relatively poor, so the idea is to set up a cluster of two indexers locally at those locations. At the headquarters we'd set up a cluster of three indexers for all other locations that don't have a cluster, and of course for the local data that pours in.

Location: within each location I thought of forwarders on the servers (obviously) and then a heavy/intermediate forwarder which collects all those logs and sends them to the indexer cluster at the headquarters. At locations that need their own cluster, we'd set up a two-server cluster.

Headquarters: there we'd set up a three-server cluster and a master node, two search heads (one for the technical part and one for Enterprise Security) and a deployment server to distribute apps to all the connected clients in all locations. The search heads should send their requests not only to the headquarters cluster but to all locations; I've read that it is possible to have a search head search across multiple clusters. The clusters don't talk to each other, so there is no data exchange between the headquarters cluster and the two other clusters.

For better understanding, since this is complex to explain, I've generated a draw.io architecture view. My questions on the feasibility of this architecture are as follows:

- Do we need a master node for each cluster, or can one master node manage multiple clusters? Provided that the locations with their own cluster don't sync with the headquarters and only process requests from the SHs.
- Is there a minimum bandwidth that should be provided? The Splunk docs say 1 Gbit/s, but I think for this kind of deployment 10-50 Mbit/s is enough?
- Is it basically better to just build a larger central cluster with more servers at the headquarters, or which performs better? My fear is to set all of this up and then fall on my face because I didn't scale the servers correctly.
- Is one deployment server for all locations and all servers feasible, or should I set up a deployment server at least for the locations with their own cluster too?

All specs should be in the draw.io view. I hope you can answer my questions. I know it's a lot, but I'm afraid I may scale it wrong and then, months into the setup, notice what is wrong. Thanks a lot for your tips and suggestions!

Question: at locations with their own cluster, do they need their own master node?
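On the master-node question: as far as I know each indexer cluster needs its own cluster master, but a single search head can be configured to search across several clusters. A minimal server.conf sketch for the search head, with host names and secrets as placeholders:

# server.conf on each search head
[clustering]
mode = searchhead
master_uri = clustermaster:hq, clustermaster:site_a, clustermaster:site_b

[clustermaster:hq]
master_uri = https://hq-cluster-master.example.com:8089
pass4SymmKey = <hq cluster secret>

[clustermaster:site_a]
master_uri = https://site-a-cluster-master.example.com:8089
pass4SymmKey = <site a cluster secret>

[clustermaster:site_b]
master_uri = https://site-b-cluster-master.example.com:8089
pass4SymmKey = <site b cluster secret>

Each of the three clusters would still run its own master node; only the search layer is shared.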
Hello Splunkers!! I am getting the error below on one of my HFs. How can I resolve or work around this issue? Please provide your expert suggestions on this.

Tcpout Processor: The TCP output processor has paused the data flow. Forwarding to output group itops_mulesoft_prod_hf has been blocked for 100 seconds. This will probably stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.

Thanks in advance.
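A minimal sketch for checking whether the receiving side's queues are saturated, which is the usual reason a tcpout group gets blocked (restrict host= to the receivers in the itops_mulesoft_prod_hf group):

index=_internal source=*metrics.log* group=queue
    (name=parsingqueue OR name=aggqueue OR name=typingqueue OR name=indexqueue)
| eval pct_full = round(current_size_kb / max_size_kb * 100, 1)
| timechart span=5m max(pct_full) by name

If the queues on the receivers sit near 100%, the bottleneck is downstream (indexing, disk, or the next hop) rather than on the HF that raised the message.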
State Date Desc Count
bc 11102021 vm 234569
bc 12102021 vm 456328
bc 11102021 vm 234569
bc 12102021 vm 4532178
cd 11102021 vm 234000
cd 12102021 vm 234000
cd 11102021 vm 234000
cd 12102021 vm 568902

From the stats output (such as above), I would like to first group the rows by State, then compare count[0] with count[1], then count[2] with count[3], and then count[3] with count[0]; if any pair matches, those rows should be displayed as the result. In the above case, for state=cd, the counts at index[0] and index[1] are the same, so the expected result is:

State Date Desc Count
cd 11102021 vm 234000
cd 12102021 vm 234000

Please assist.
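A minimal sketch of one way to keep only rows whose Count matches an adjacent row's Count within the same State (this covers the cd example; the exact index pairing described above may need a different comparison pattern):

<your existing stats search>
| sort 0 State Date
| streamstats current=f window=1 global=false last(Count) as prev_count by State
| reverse
| streamstats current=f window=1 global=false last(Count) as next_count by State
| reverse
| where Count == prev_count OR Count == next_count
| table State Date Desc Count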
Hi, is it possible, when using a Global Account, to customise the fields, i.e. add fields other than just Username and Password? Ideally, I would like to be able to specify an Account Name, API Key and Authorization Header in the Account tab, so that when configuring each input, I can choose the Account Name from a drop-down and have it pull in the API Key and Authorization Header fields. I can specify these as "Additional Parameters" for the add-on, but then I have no way of selecting them on a per-input basis when configuring each input.
Hello everyone, I have a question about how to write a subquery in Splunk. For example, I would like to get a list of productId values that were returned but were not purchased again later. The "NOT IN subquery" part is where I'm stuck. How can I accomplish this?

index=main sourcetype=access_combined_wcookie action=returned NOT IN [search index=main sourcetype=access_combined_wcookie action=purchase | table clientip]
| stats count, dc(productId) by clientip

Thank you in advance.
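A minimal sketch of one way to express "returned, but not purchased again afterwards" without a subsearch, assuming both event types carry productId (swap the by-field to clientip if the comparison should be per client):

index=main sourcetype=access_combined_wcookie (action=returned OR action=purchase)
| stats max(eval(if(action=="returned", _time, null()))) as last_return
        max(eval(if(action=="purchase", _time, null()))) as last_purchase
        by productId
| where isnotnull(last_return) AND (isnull(last_purchase) OR last_purchase < last_return)
| table productId

This avoids the subsearch result limits and also captures the "later" condition, which a plain NOT [subsearch] cannot.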
Hi, I need to install the add-on below. This add-on creates indexes and the required roles. We don't want the add-on to control the indexes, so indexes.conf has been taken out of the add-on and we will create the indexes ourselves. We want 14 indexes to be created, but is it OK to go with one index and different sourcetypes instead? We do have other configs provided in the add-on; will there be any impact if we go with one index with multiple sourcetypes, and will there be any impact during add-on updates?

https://splunkbase.splunk.com/app/4153/
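For reference, a minimal indexes.conf sketch for creating one of the indexes yourself (the index name, paths and retention are placeholders; the names the add-on's searches and macros expect should be taken from its original indexes.conf):

# indexes.conf on the indexers (or pushed from the cluster master)
[my_addon_index]
homePath   = $SPLUNK_DB/my_addon_index/db
coldPath   = $SPLUNK_DB/my_addon_index/colddb
thawedPath = $SPLUNK_DB/my_addon_index/thaweddb
frozenTimePeriodInSecs = 7776000   # 90 days; adjust to your retention policy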
Hello Splunk Gurus, I am trying to generate tabular data for the API requests. The following is the query to extract the table data below. The FirstComp, SecondComp and ThirdComp fields are extracted at run time from the log.

index=micro host=app150*usa.com "API Timeline"
| rex field=_raw "FirstCompTime:(?<FirstComp>[^\,]+)"
| rex field=_raw "SecondCompTime:(?<SecondComp>[^\,]+)"
| rex field=_raw "ThirdCompTime:(?<ThirdComp>[^\,]+)"
| table FirstComp, SecondComp, ThirdComp

FirstComp SecondComp ThirdComp
78 25 31
80 22 34
81 26 36

Now I need to calculate the 95th and 99th percentiles, making sure the component names appear in the first column, as shown below:

Components 95th percentile 99th percentile
FirstComp 77 79
SecondComp 23 24
ThirdComp 32 35

The desired output should show the 95th and 99th percentiles by component. So eventually I want to bring the column header names in as the first column's values, and the next two columns should hold the respective 95th and 99th percentiles. Thanks in advance for your time and help. Tanzy
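A minimal sketch of one way to pivot the component columns into rows and compute the percentiles, building on the rex extractions above:

index=micro host=app150*usa.com "API Timeline"
| rex field=_raw "FirstCompTime:(?<FirstComp>[^\,]+)"
| rex field=_raw "SecondCompTime:(?<SecondComp>[^\,]+)"
| rex field=_raw "ThirdCompTime:(?<ThirdComp>[^\,]+)"
| table FirstComp SecondComp ThirdComp
| streamstats count as row
| untable row Components value
| stats perc95(value) as "95th percentile" perc99(value) as "99th percentile" by Components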
We have recently migrated from on-prem to Splunk Cloud. The current setup is: UFs (several of them) --> 2x HFs --> Splunk Cloud.

On one of the UF hosts, my outputs.conf is currently configured to send events to both the on-prem indexer (for business reasons) and the HF. This is so that the events are seen on both the on-prem indexer and the Cloud indexer for a short period of time until on-prem is retired completely.

When I query the index on on-prem and Cloud for the same time interval, I see my event breaking is interpreted differently on Cloud compared to on-prem, even though both search heads have the same props.conf for my sourcetype. For example, for a particular search string on both search heads on the same index, I get different event counts. Also, on Cloud, my events are not breaking as per the regex defined for line breaking, and some events are completely missing on Cloud even though I can see them on the on-prem search head.

props.conf snippet:
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
MAX_TIMESTAMP_LOOKAHEAD = 12
NO_BINARY_CHECK = true
TIME_FORMAT = %H:%M:%S.%3N
category = Custom
pulldown_type = 1

Do I need to put a copy of props.conf on the HF also? If yes, must I put the props.conf of every sourcetype on the HF, or only under certain specific conditions?
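A minimal sketch of the kind of stanza that would typically need to live on the HFs in this chain: index-time settings such as LINE_BREAKER, SHOULD_LINEMERGE, TIME_* and TRUNCATE are applied by the first full Splunk Enterprise instance the data passes through, which here is the HF rather than Splunk Cloud (the sourcetype name is a placeholder, and SHOULD_LINEMERGE/TIME_PREFIX are assumptions added to make the breaking explicit):

# props.conf on both HFs
[your_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 12

Search-time props (field extractions, lookups, etc.) still belong on the search heads; only parsing settings need to sit on the HF tier.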
Hi, we have two fields, latency and partition. I want to graph the latency vs. time on one of our machines for each of our partitions, so the resulting panel will have four separate line graphs. The log output we have is:

As you can see, there are 4 different partitions: 00-BTECEU, 01-BTECEU, 02-BTECEU, 03-BTECEU. I was able to parse out the partition, but I'm having issues separating the latency for each partition.

Here's the query I tried, but the resulting table is empty:

host=irmnrjeb0227d AND latency
| reverse
| rex field=name "Partition\s(?<pName>\d\d\W\w+)"
| streamstats current=f last(latency) as previouslatency by pName
| rename latency as currentLatency
| eval delta = currentLatency - previouslatency
| timechart span=30s first(delta) as Delta by pName

Any help is appreciated, thanks!
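If the goal is simply latency over time per partition (rather than the delta between consecutive samples), a minimal sketch reusing the same rex, and assuming latency is already extracted as a numeric field, would be:

host=irmnrjeb0227d latency=*
| rex field=name "Partition\s(?<pName>\d\d\W\w+)"
| timechart span=30s avg(latency) by pName

If the delta really is what's needed, it's worth checking that the rex populates pName on the same events that carry latency; a null pName in the streamstats by-clause would leave previouslatency empty and the final table blank.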
Hi, I have a field in my log called "MobileNumber" and I need to show the count of MobileNumber by location on a map. E.g.: 00121234567 -> area code: 0012, number: 1234567.

If the area code belongs to Berlin (0151, 0157 or 0173), show the total count of area codes that belong to Berlin on the map. If the area code belongs to Wolfsburg (0361), show the total count of area codes that belong to Wolfsburg on the map.

FYI: Latitude and Longitude do not exist in the log file.

Any ideas? Thanks
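A minimal sketch, assuming a small CSV lookup is created (hypothetical name area_code_city.csv, columns area_code, city, lat, lon) that maps each area code to its city and coordinates:

<your base search with MobileNumber extracted>
| eval area_code = substr(MobileNumber, 1, 4)
| lookup area_code_city.csv area_code OUTPUT city lat lon
| geostats latfield=lat longfield=lon count by city

The substr() length and the area-code-to-city rows are assumptions; they would need to match however the numbers are actually formatted.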
We're ingesting Tomcat logs and looking for items tagged [SEVERE]. I'd like to be able to pull a report of error rate and look for errors which are occurring at a significantly higher than average rate *for their error type*. In addition, we're getting data streams from multiple hosts, each of which is really its own instance and has its own "native" error rate.

I need the average rate of occurrence of errors over the last week and over the last day, grouped by host and error type. Then I need to flag any error whose rate has risen by... say, 500%. So far the best I've come up with is this:

index="tomcat" sourcetype="catalina.out_logs" SEVERE earliest=-7d latest=-1d
| rex field=_raw "\[SEVERE[\s\t]+\][\s\t]+(?<err>.+)[\n\r]"
| eval errhost = host + "::" + err
| bucket _time span=1h
| stats count as tcount by errhost
| stats avg(tcount) as wk_avg by errhost
| appendcols [search index="tomcat" sourcetype="catalina.out_logs" SEVERE earliest=-1d latest=now()
    | rex field=_raw "\[SEVERE[\s\t]+\][\s\t]+(?<err>.+)[\n\r]"
    | eval errhost = host + "::" + err
    | bucket _time span=1h
    | stats count as tcount by errhost
    | stats avg(tcount) as day_avg by errhost]
| eval perc= round(((day_avg - wk_avg) * 100 / wk_avg),2)
| fields + perc errhost
| search perc > 500.0
| search NOT "Unable to serve form"
| sort perc desc

So: I'm pulling SEVERE errors, extracting just the error text, concatenating that to the host to get my group-by string, bucketing in 1-hour increments to get an average, then building a chart with the 7-day average and the 1-day average for each host/error pair. Wondering if anyone else has a better way to do it? Thanks!
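One possible refinement (a sketch, not a tested drop-in): appendcols pairs rows by position rather than by errhost, so if the two result sets don't contain exactly the same errhost values in the same order, day and week averages can be attributed to the wrong error. Running a single search over the whole week and splitting it into windows avoids that:

index="tomcat" sourcetype="catalina.out_logs" SEVERE earliest=-7d@h latest=now() NOT "Unable to serve form"
| rex field=_raw "\[SEVERE[\s\t]+\][\s\t]+(?<err>.+)[\n\r]"
| eval errhost = host . "::" . err
| eval window = if(_time >= relative_time(now(), "-1d@h"), "day", "week")
| bucket _time span=1h
| stats count as tcount by window, errhost, _time
| chart avg(tcount) over errhost by window
| eval perc = round((day - week) * 100 / week, 2)
| where perc > 500
| sort - perc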
Hi, what is the rex for the MQ error code? Here is the log:

2021-10-08 06:52:12,785 ERROR TOPIC-00000 [Utility] MQJCA1011: Failed to allocate a JMS connection.   error code=MQJCA1011 error description=Failed to allocate a JMS connection

Thanks,
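A minimal sketch, assuming the code always appears after the literal "error code=":

<your base search>
| rex field=_raw "error code=(?<mq_error_code>\S+)"

or, to pick up the MQJCA code wherever it appears in the message:

| rex field=_raw "(?<mq_error_code>MQJCA\d+)"

Both would extract MQJCA1011 from the sample event above.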
Hi, I am making a search similar to this:

index=abc sourcetype=xyz "searchkeyword"

The search result should contain only events that match the search keyword, like:

<timestamp>| trace ......searchkeyword.......

but I am getting events from around that time along with the matched events:

<timestamp>| trace ......someotherresult.......
<timestamp>| trace ......searchkeyword.......
<timestamp>| trace ......someotherresult.......
<timestamp>| trace ......someotherresult.......
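If the extra lines turn out to be part of the same multi-line event (i.e. one returned event contains several "trace" lines), the issue is line breaking at index time rather than the search itself. A minimal props.conf sketch for the indexer/HF tier, assuming each trace line should be its own event:

[xyz]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)

This only affects newly indexed data; already-indexed multi-line events stay as they are.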
Workday Add-on for Splunk (https://splunkbase.splunk.com/app/3783/#/details, version 1.0.2), trying to support a multi-tenant setting. I am trying to rename TA-workday to TA-workday44 (instead of TA-workday_tmp_1) as per the steps in https://community.workday.com/comment/860853#comment-860853 (to support multi-tenant collection), using the following steps.

Files to be updated:

TA-workday_tmp_1\app.manifest (3 hits)
  Line 4: "title": "Workday Add-on for Splunk (tmp_1)",
  Line 7: "name": "TA-workday_tmp_1",
  Line 18: "description": "Workday Add-on for Splunk (tmp_1)",
TA-workday_tmp_1\appserver\static\js\build\globalConfig.json (4 hits)
  Line 3: "name": "TA-workday_tmp_1",
  Line 7: "restRoot": "TA_workday_tmp_1"
  Line 294: "name": "workday_tmp_1",
  Line 295: "title": "workday_tmp_1",
TA-workday_tmp_1\bin\ta_workday_tmp_1_declare.py (2 hits)
  Line 17: ta_name = 'TA-workday_tmp_1'
  Line 18: ta_lib_name = 'ta_workday_tmp_1'
TA-workday_tmp_1\bin\TA_workday_tmp_1_rh_settings.py (2 hits)
  Line 2: import ta_workday_tmp_1_declare
  Line 150: 'ta_workday_tmp_1_settings',
TA-workday_tmp_1\bin\TA_workday_tmp_1_rh_workday_tmp_1.py (2 hits)
  Line 2: import ta_workday_tmp_1_declare
  Line 63: 'workday_tmp_1',
TA-workday_tmp_1\bin\workday_tmp_1.py (6 hits)
  Line 1: import ta_workday_tmp_1_declare
  Line 14: import input_module_workday_tmp_1 as input_module
  Line 30: super(ModInputworkday, self).__init__("ta_workday_tmp_1", "workday_tmp_1", use_single_instance)
  Line 36: scheme.title = ("workday_tmp_1")
  Line 60: return "TA-workday_tmp_1"
TA-workday44/bin (added this step): mv ta_workday to ta_workday44
TA-workday_tmp_1\default\app.conf (5 hits)
  Line 11: description = Workday Add-on for Splunk (tmp_1)
  Line 15: label = Workday Add-on for Splunk (tmp_1)
  Line 19: id = TA-workday_tmp_1
  Line 23: reload.ta_workday_tmp_1_account = simple
  Line 24: reload.ta_workday_tmp_1_settings = simple
TA-workday_tmp_1\default\collections.conf (1 hit)
  Line 1: [TA_workday_tmp_1_checkpointer]
TA-workday_tmp_1\default\data\ui\views\configuration.xml (1 hit)
  Line 2: <view template="TA-workday_tmp_1:templates/base.html" type="html" isDashboard="False">
TA-workday_tmp_1\default\data\ui\views\home.xml (2 hits)
  Line 2: <label>TA-Workday_tmp_1_Add-on_for_Splunk</label>
  Line 3: <description>This is an example view in add-on TA-Workday_tmp_1_Add-on_for_Splunk</description>
TA-workday_tmp_1\default\data\ui\views\inputs.xml (1 hit)
  Line 2: <view template="TA-workday_tmp_1:templates/base.html" type="html" isDashboard="False">
TA-workday_tmp_1\default\inputs.conf (1 hit)
  Line 1: [workday_tmp_1]
TA-workday_tmp_1\default\restmap.conf (10 hits)
  Line 2: [admin:TA_workday_tmp_1]
  Line 4: members = TA_workday_tmp_1_settings, TA_workday_tmp_1_workday_tmp_1
  Line 6: [admin_external:TA_workday_tmp_1_settings]
  Line 8: handlerfile = TA_workday_tmp_1_rh_settings.py
  Line 12: [admin_external:TA_workday_tmp_1_workday_tmp_1]
  Line 14: handlerfile = TA_workday_tmp_1_rh_workday_tmp_1.py
TA-workday_tmp_1\default\web.conf (8 hits)
  Line 2: [expose:TA_workday_tmp_1_settings]
  Line 3: pattern = TA_workday_tmp_1_settings
  Line 6: [expose:TA_workday_tmp_1_settings_specified]
  Line 7: pattern = TA_workday_tmp_1_settings/*
  Line 10: [expose:TA_workday_tmp_1_workday]
  Line 11: pattern = TA_workday_tmp_1_workday
  Line 14: [expose:TA_workday_tmp_1_workday_specified]
  Line 15: pattern = TA_workday_tmp_1_workday/*
TA-workday_tmp_1\README\inputs.conf.spec (1 hit)
  Line 1: [workday_tmp_1://<name>]

But I am still getting:

10-11-2021 14:29:03.878 +0000 ERROR AdminManagerExternal [6624 TcpChannelThread] -
Stack trace from python handler:
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-workday44/bin/ta_workday44/aob_py3/splunktaucclib/rest_handler/handler.py", line 117, in wrapper
    for name, data, acl in meth(self, *args, **kwargs):
  File "/opt/splunk/etc/apps/TA-workday44/bin/ta_workday44/aob_py3/splunktaucclib/rest_handler/handler.py", line 168, in get
    output_mode='json',
  File "/opt/splunk/etc/apps/TA-workday44/bin/ta_workday44/aob_py3/solnlib/packages/splunklib/binding.py", line 289, in wrapper
    return request_fun(self, *args, **kwargs)
  File "/opt/splunk/etc/apps/TA-workday44/bin/ta_workday44/aob_py3/solnlib/packages/splunklib/binding.py", line 71, in new_f
    val = f(*args, **kwargs)
  File "/opt/splunk/etc/apps/TA-workday44/bin/ta_workday44/aob_py3/solnlib/packages/splunklib/binding.py", line 679, in get
    response = self.http.get(path, all_headers, **query)
  File "/opt/splunk/etc/apps/TA-workday44/bin/ta_workday44/aob_py3/solnlib/packages/splunklib/binding.py", line 1183, in get
    return self.request(url, { 'method': "GET", 'headers': headers })
  File "/opt/splunk/etc/apps/TA-workday44

Any pointers? Thanks.
We have Cloud App Security (CAS) Alerts set to be pulled at an interval of every 5 minutes. This was the default when we set it up. What we're finding is that the same alert is being pulled every 5 minutes. The alert ID and URL are both identical, and so is all the content.

Here's the inputs.conf snippet:

[splunk_ta_o365_cloud_app_security://CAS_Alerts]
content_type = Alerts
index = o365
interval = 300

We're using the latest version of this app: https://splunkbase.splunk.com/app/4055/

From the log it does appear that the start time is incrementing, but something still isn't working as designed:

2021-10-11 11:57:29,339 level=INFO pid=30282 tid=MainThread logger=splunksdc.collector pos=collector.py:run:251 | | message="Modular input started."
2021-10-11 11:57:29,357 level=INFO pid=30282 tid=MainThread logger=splunk_ta_o365.common.settings pos=settings.py:load:36 | datainput=b'CAS_Alerts' start_time=1633971449 | message="Load proxy settings success." enabled=False host=b'' port=b'' username=b''
2021-10-11 11:57:29,386 level=INFO pid=30282 tid=MainThread logger=splunk_ta_o365.common.portal pos=portal.py:get_cas_token_by_psk:177 | datainput=b'CAS_Alerts' start_time=1633971449 | message="Acquire cloud application access token success."
2021-10-11 11:57:29,653 level=INFO pid=30282 tid=MainThread logger=splunk_ta_o365.modinputs.cloud_app_security pos=cloud_app_security.py:_create_consumer:139 | datainput=b'CAS_Alerts' start_time=1633971449 | message="Start retrieving Cloud Application Security messages." portal_url=b'<redacted>' portal_region=b'<redacted>'
2021-10-11 11:57:29,653 level=INFO pid=30282 tid=MainThread logger=splunk_ta_o365.modinputs.cloud_app_security pos=cloud_app_security.py:run:46 | datainput=b'CAS_Alerts' start_time=1633971449 | message="Start recording Cloud Application Security messages." source=b'https://<redacted>.portal.cloudappsecurity.com/api/v1/alerts/'
2021-10-11 11:57:29,653 level=INFO pid=30282 tid=MainThread logger=splunk_ta_o365.common.portal pos=portal.py:get:630 | datainput=b'CAS_Alerts' start_time=1633971449 | message="Calling Cloud Application Security API." url=b'https://<redacted>.portal.cloudappsecurity.com/api/v1/alerts/' params={'$filter': datetime.datetime(2021, 10, 11, 16, 57, 29, 653529)}
2021-10-11 11:57:50,766 level=INFO pid=30282 tid=MainThread logger=splunksdc.collector pos=collector.py:run:254 | | message="Modular input exited."
2021-10-11 12:02:26,129 level=INFO pid=31032 tid=MainThread logger=splunksdc.collector pos=collector.py:run:251 | | message="Modular input started."
2021-10-11 12:02:26,274 level=INFO pid=31032 tid=MainThread logger=splunk_ta_o365.common.settings pos=settings.py:load:36 | datainput=b'CAS_Alerts' start_time=1633971746 | message="Load proxy settings success." enabled=False host=b'' port=b'' username=b''
2021-10-11 12:02:26,303 level=INFO pid=31032 tid=MainThread logger=splunk_ta_o365.common.portal pos=portal.py:get_cas_token_by_psk:177 | datainput=b'CAS_Alerts' start_time=1633971746 | message="Acquire cloud application access token success."
2021-10-11 12:02:26,574 level=INFO pid=31032 tid=MainThread logger=splunk_ta_o365.modinputs.cloud_app_security pos=cloud_app_security.py:_create_consumer:139 | datainput=b'CAS_Alerts' start_time=1633971746 | message="Start retrieving Cloud Application Security messages." portal_url=b'<redacted>' portal_region=b'<redacted>'
2021-10-11 12:02:26,574 level=INFO pid=31032 tid=MainThread logger=splunk_ta_o365.modinputs.cloud_app_security pos=cloud_app_security.py:run:46 | datainput=b'CAS_Alerts' start_time=1633971746 | message="Start recording Cloud Application Security messages." source=b'https://<redacted>.portal.cloudappsecurity.com/api/v1/alerts/'
2021-10-11 12:02:26,574 level=INFO pid=31032 tid=MainThread logger=splunk_ta_o365.common.portal pos=portal.py:get:630 | datainput=b'CAS_Alerts' start_time=1633971746 | message="Calling Cloud Application Security API." url=b'https://<redacted>.portal.cloudappsecurity.com/api/v1/alerts/' params={'$filter': datetime.datetime(2021, 10, 11, 17, 2, 26, 574478)}
2021-10-11 12:02:48,135 level=INFO pid=31032 tid=MainThread logger=splunksdc.collector pos=collector.py:run:254 | | message="Modular input exited."
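As a search-time stopgap while the input overlap is being investigated, the duplicates can be suppressed when searching; both the filter and the field name here are placeholders, so check what the add-on actually names the alert identifier:

index=o365 <your CAS alert filter>
| dedup alert_id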
How do I set up a search against the Authentication data model that covers its child datasets Default, Insecure and Privileged Authentication? It uses an action of success or failure. I am using the following Splunk query:

(`cim_Authentication_indexes`) tag=authentication NOT (action=success user=*$) NOT "pam_unix(sshd:auth): authentication failure;"

Is there anything wrong with the above query, or how can I make it work?
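A minimal sketch for checking whether the Authentication data model is actually being populated from those indexes, which is usually the first thing to verify before tuning the constraint search:

| tstats summariesonly=false count from datamodel=Authentication by Authentication.action, Authentication.user
| rename Authentication.* as *

If this returns nothing, the problem is with the tags and CIM field mappings on the events rather than with the NOT clauses in the query above.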
I work in a very large environment with Splunk Enterprise + ES. Use cases are forgotten and no one has time to deal with them, so I am trying to create a plan to use them to better our environment. Please advise. Thanks a million.
One of our employees left the company. Now all reports and alerts owned by him are not visible in Splunk. We have Splunk 7.3.3 in our environment. How can I find those alerts/reports? They are very important for us, as we modify them periodically.

Regards,
Nilesh
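A minimal sketch for locating the orphaned objects from a search head (the username is a placeholder); once they're listed, they can usually be reassigned from the Reassign Knowledge Objects page under Settings, or by changing the owner via the REST API:

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search eai:acl.owner="departed_username"
| table title, eai:acl.app, eai:acl.owner, eai:acl.sharing, disabled

This requires a role that can run the rest command and see other users' objects (e.g. admin).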