All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi, Connection metrics are logged by splunkd to metrics.log. To search metrics.log directly, replace ... in the following search with a space-delimited list of your expected egress addresses:

index=_internal source=*metrics.log* host=idx-i-* group=tcpin_connections sourceIp IN (...)

The same data is also logged to the _metrics metrics index:

| mstats avg(spl.mlog.tcpin_connections._tcp_KBps) as KBps where index=_metrics group=tcpin_connections sourceIp IN (...) by sourceIp

You can use the search/jobs endpoint to run an asynchronous or blocking request to execute one of the searches above. See https://docs.splunk.com/Documentation/SplunkCloud/9.0.2305/RESTREF/RESTsearch#search.2Fjobs for more information.
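To illustrate the search/jobs call, here is a rough Python sketch of the request construction only; the host name is a placeholder and authentication is left to whatever HTTP client you use:

```python
from urllib.parse import urlencode

# Hypothetical Splunk management endpoint; replace with your own host and port.
SPLUNK_BASE = "https://splunk.example.com:8089"

def build_search_job_request(search: str, blocking: bool = True) -> tuple[str, bytes]:
    """Build the URL and form-encoded body for a POST to search/jobs.

    search/jobs expects the query to start with a generating command,
    so a bare "index=..." query is prefixed with "search".
    """
    if not search.lstrip().startswith(("search", "|")):
        search = "search " + search
    body = urlencode({
        "search": search,
        "exec_mode": "blocking" if blocking else "normal",  # blocking waits for completion
        "output_mode": "json",
    }).encode()
    return f"{SPLUNK_BASE}/services/search/jobs", body

url, body = build_search_job_request(
    "index=_internal source=*metrics.log* group=tcpin_connections"
)
```

POST the body to the URL with your credentials (for example an `Authorization: Splunk <token>` header); a blocking job returns a sid whose results you can then fetch from search/jobs/<sid>/results.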
Hi, Ingest actions may be the simplest solution. For each source type, e.g. kube:container:container1, create an ingest action with a "Set Index" rule and set the value to the target index. If you need to route events with the same source type to different indexes, you can add a regular expression or eval-based condition to match content within the events and chain together multiple Set Index rules. More information is available at https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/DataIngest#Set_index.
Hi Kelly, The following error is normal when no proxy is enabled or no proxy credentials are saved in TA-Zscaler_CIM:

PersistentScript - From {/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-Zscaler_CIM/bin/TA_Zscaler_CIM_rh_settings.py persistent}: solnlib.credentials.CredentialNotExistException: Failed to get password of realm=__REST_CREDENTIAL__#TA-Zscaler_CIM#configs/conf-ta_zscaler_cim_settings, user=proxy.

The error is likely normal in TA-sailpoint_identitynow-auditevent-add-on and TA-trendmicrocloudappsecurity for the same reason.

The read timeout error in TA-trendmicrocloudappsecurity is caused by the Trend Micro /v1/siem/security_events endpoint not returning an HTTP response within 5 minutes, the default read timeout inherited by TA-trendmicrocloudappsecurity when it calls the Splunk Add-on Builder helper.send_http_request() method with timeout=None. The timeout value is not configurable, but TA-trendmicrocloudappsecurity/bin/input_module_tmcas_detection_logs.py could be modified to use a longer timeout value:

response = helper.send_http_request(
    url,
    "GET",
    parameters=params,
    payload=None,
    headers=headers,
    cookies=None,
    verify=True,
    cert=None,
    timeout=(None, 60),
    use_proxy=use_proxy,
)

However, this change should be made by Trend Micro, preferably by making the connect and read timeout values fully configurable.

Explosions in splunkd.log events can often be caused by failures in modular or scripted inputs, where a script logs a message before a process fails, Splunk immediately restarts the process, and the cycle repeats ad infinitum. Your screenshots don't necessarily point to that, but you may get closer to a cause with:

index=_internal source=*splunkd.log* host=*splunkdcloud* | cluster showcount=t | sort 10 - cluster_count | table cluster_count _raw

If you don't see anything with a cluster_count of the expected magnitude, remove host=*splunkdcloud* from the search.
Change the sort limit from 10 to 0 to show all results.
Hi Alex, Yes, this issue was resolved for us with the 9.1.1 release (we originally tested with a Splunk 9.0.6 debug build that also had the fix, so 9.0.6 should also be fine). We are no longer experiencing the issue. Peter
Hypothetically, Example:Isolation:Url would have some other configuration extracting jsessionid, access_token, id_token, or password, possibly through another props stanza, e.g. [host::...] or [source::...], matching the input.
@KR1 Can you please show the <input> block for your multiselect? You need a suitable <change> block to be able to set/unset the nf/sf tokens correctly.
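For reference, one common pattern for such a <change> block looks roughly like the sketch below; the nf/sf token names come from the thread, while the input token, label, and choice values are hypothetical:

```xml
<input type="multiselect" token="ms_status">
  <label>Status</label>
  <choice value="nf">nf</choice>
  <choice value="sf">sf</choice>
  <change>
    <!-- form.ms_status holds all selected values; set each helper token
         only when its value is selected, and unset it (null) otherwise -->
    <eval token="nf">if(match('form.ms_status', "nf"), "true", null())</eval>
    <eval token="sf">if(match('form.ms_status', "sf"), "true", null())</eval>
  </change>
</input>
```

Panels can then use depends="$nf$" or searches can reference $nf$/$sf$ directly.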
I was able to figure out the issue. I had to uncheck the Enable indexer acknowledgement checkbox; I don't know why that prevented the instance from receiving logs. I'm currently using localhost but will eventually change that to our domain. Thanks
Wait a second. Where did you try to install the Add-on Builder? On your cloud instance? You shouldn't do that. It's supposed to be installed on your development instance of Splunk Enterprise, where you should build your app. When the custom app is ready, submit it for vetting and install it onto your cloud instance. See https://docs.splunk.com/Documentation/AddonBuilder/4.1.3/UserGuide/Installation
Not really. Maintaining anything takes time and effort; essentially, it costs money. Even if there was a Splunk PowerShell module 10 years ago, both the Splunk API and PowerShell have evolved since then. And since Windows is not really the operating system of choice for Splunk (yes, you can run Splunk on Windows, but it has some limitations and it's usually better to just go with Linux), there is much more demand for tools for Unix-based admins and devs. Simple as that. On the other hand, you can always run Python on Windows and use the Python Splunk libraries.
Not shocking at all. Businesses have to make decisions about where to focus their efforts and money, and it would seem PowerShell did not make the cut. The software on GitHub probably was not official, and the employee who built it may have moved on to other things.
Hi, Have you tried the one I shared? If yes, please share your updated dashboard XML. It's working for me; I can see the time on the report.
@vijreddy30 - which role have you assigned to the user? It seems your user doesn't have access to that page. Usually only admin has access to that page. I hope this helps!!! Kindly upvote if it does!!!
Hi Team, I created a test user, assigned the viewer role, and logged in with the test credentials. When I selected the manage app settings operation, a Splunk 404 Forbidden error window was displayed. A "click here" option was displayed in the window; after clicking it and logging in again, manage settings worked. How do I overcome the 404 Forbidden error? Please help me. Regards, Vijay K.
Hi, We want to get data in from Perception Point. We haven't seen any add-on for it. We thought about spinning up a VM with a UF, but we would prefer to get data in via an add-on, even if we have to create one ourselves. The Add-on Builder, however, is failing to install in our Splunk Cloud instance.
Ok, we're getting somewhere. Your appender should be sending the events to the listening components on the localhost. 1. Do you have a UF or a Splunk Enterprise instance on the same host? 2. Does it have an input defined on port 8088? 3. Is your network traffic firewalled? 4. Does your HTTP input have TLS enabled or disabled? (Your appender configuration will expect plain unencrypted HTTP.)
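To check points 2 and 4 by hand, you can hit the HEC endpoint directly. A minimal Python sketch that only builds the request (the plain-HTTP URL matches the appender's expectation; the token is a placeholder you must replace with your own):

```python
import json
from urllib.request import Request

# Placeholders: default HEC port over plain HTTP, and a dummy token.
HEC_URL = "http://localhost:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

def build_hec_request(event: dict) -> Request:
    """Build (but do not send) a HEC event request for manual testing."""
    payload = json.dumps({"event": event, "sourcetype": "log4j"}).encode()
    return Request(
        HEC_URL,
        data=payload,
        headers={
            "Authorization": f"Splunk {HEC_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_hec_request({"message": "appender connectivity test"})
# Sending it with urllib.request.urlopen(req) should return
# {"text":"Success","code":0} when HEC is reachable and the token is valid.
```

The equivalent one-liner is curl against the same URL with the same Authorization header; a connection refused points at questions 1-3, while an SSL-related error points at question 4.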
Hello, Just checking through if the issue was resolved or you have any further questions?
Hello @Albert_Cyber, You have used the right way: Configure -> Incident Management -> Incident Review Settings -> Incident Review - Event Attributes. Just make sure you click the Save button at the very bottom (I have seen a customer with a similar issue, and all it needed was a click on the "Save" button at the very end). If the issue is still not resolved, can you please provide the information / screenshots below: - Search results showing the field is available - Notable configuration (AR) screenshot - Event Attributes screenshot
Thank you very much for this suggestion.