All Posts

Check your config with btool. It's relatively easy to mistype pass4SymmKey (the name of the option), making it effectively not set.
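As a quick sketch (assuming a clustered setup where the key lives in the [clustering] stanza of server.conf), btool shows what Splunk actually parsed and which file each value came from, so a mistyped option name would appear verbatim instead of pass4SymmKey:

```
# List the clustering stanza as Splunk resolves it, with source files (--debug).
# A typo such as "pass4Symmkey" would show up as its own line here,
# while the real pass4SymmKey would be missing or at its default.
$SPLUNK_HOME/bin/splunk btool server list clustering --debug
```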
What does @marnall mean by "This isn't ideal because if you update the TA from Splunkbase in the future you will lose your changes"? What changes?
For issue 1 I have also had this problem, where the subscription just stops working and does not auto-correct. There is a lookup in the Splunk Enterprise instance which contains subscription information. You can make a scheduled search to overwrite this lookup, and then the app will make a new subscription and the logs should come in again. 
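A minimal sketch of such a scheduled search; the lookup name here is hypothetical (find the actual subscription lookup file inside the app), and this simply overwrites it with an empty result so the app rebuilds the subscription on its next run:

```
| makeresults
| fields - _time
| outputlookup example_subscription_lookup.csv
```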
It sounds like your bigger problem is that Splunk thinks your pass4SymmKey is empty or set to the default value. Could you try moving the pass4SymmKey to the [clustering] stanza next to the pass4SymmKey_minLength field?
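For illustration, a server.conf fragment of what that could look like; the key and the minLength value are placeholders, not recommended settings:

```
[clustering]
pass4SymmKey = <your_shared_key>
pass4SymmKey_minLength = 12
```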
Any suggestions would be appreciated. In the first row I would like to show only the first 34 characters and in the second row the first 39 characters. I figured out how to only show a certain number of characters for all rows, but not individual rows.

| eval msgTxt=substr(msgTxt, 1, 49)
| stats count by msgTxt
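One way to sketch per-row truncation, assuming the rows have a stable order that streamstats can number (field name and lengths taken from the question):

```
| streamstats count AS rowNum
| eval msgTxt=case(rowNum==1, substr(msgTxt, 1, 34),
                   rowNum==2, substr(msgTxt, 1, 39),
                   true(), msgTxt)
| stats count by msgTxt
```

If "first row" and "second row" refer to the rows of the final table rather than the raw events, run the streamstats/eval pair after the stats instead of before it.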
Recently, Enterprise Security allowed for event timestamps to be index time instead of event time. I was excited about this since it would alleviate some issues related to log ingestion delays and outages. However, it appears there are some limitations which I have questions about. From the previously linked docs:

"Selecting Index time as the time range for a correlation search might impact the performance of the search."

What is the nature of the impact, specifically?

"Select Index time to run a correlation search only on raw events that do not use accelerated data model fields or the tstats command in the search. Otherwise, the UI might display errors. You can update the correlation search so that it does not include any tstats commands to avoid these errors."

So there is just no option to use index time with accelerated data models? Will this feature be added in the future?

"Drill down searches for notables might get modified when using Index time."

Modified in what way?

"Index time filters are added after the first " | " pipe character in a search string. Index time filters do not have any effect on accelerated datamodels, stats, streaming, or lookup commands. So, custom drilldown searches must be constructed correctly when using Index time."

What are index time filters? What is the correct way to construct custom drilldowns when using index time?

"Index time might not apply correctly to the Contributing Events search for risk notables."

How might it not apply correctly?

"The Index time time range might not be applied correctly to the original correlation search with datamodels, stats, streaming, or lookup commands at the end of the search since the index time range is applied after the "savedsearch" construct. Therefore, you must adjust the time range manually for the search."

How might it not apply correctly? Is there a specific example?

"When you select Index time to run the search, all the underlying searches are run using the "All Time" time range picker, which might impact the search performance. This includes the correlation search as well as the drill-down search of the notable adaptive response action. Additionally, the drill down search for the notable event in Incident Review also uses index time."

Am I understanding that first sentence correctly? What possible reason could there be to run the underlying search over "All Time"? In that case, what purpose does the alert time range serve? This seems like a massive caveat that makes index time practically unusable.

Index time seemed super promising, but the fact that you can't use it with accelerated data models, that it searches over all time, and that it could modify drilldowns in mysterious and unknown ways makes me wonder what use it actually serves. These seem like major issues, but I wanted to make sure I wasn't misunderstanding something.
By default, a text value in a column in a table in Splunk Dashboard Studio will be wrapped. You can adjust the column size by clicking and dragging the vertical divider space between the values. To change the row size, you need to change the size of the value, whether by adjusting the font or the content of the row itself.
Great! What is the datamodel summarization?
Yeah, unfortunately you have to make edits to the app code to create the "Description" input.  See chrisyounger's solution here: https://community.splunk.com/t5/Archive/In-the-Splunk-Add-on-for-ServiceNow-how-do-set-extra-custom/td-p/383045 
Version 9.0.9 of the Splunk Forwarder does contain OpenSSL 1.0.2zj. Is this version of OpenSSL vulnerable to CVE-2024-5535? I could not find a direct confirmation. Latest third-party security update involving OpenSSL: https://advisory.splunk.com/advisories/SVD-2024-0304 As the latest advisory does not include OpenSSL (https://advisory.splunk.com/advisories/SVD-2024-0718), it may be best to wait for the next patch.
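As a generic check (not a statement about CVE-2024-5535 applicability), you can confirm which OpenSSL build a given forwarder actually ships:

```
# Print the version of the OpenSSL binary bundled with the forwarder
$SPLUNK_HOME/bin/splunk cmd openssl version
```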
Are you wanting to get your Github logs into Splunk? I've gotten the log integration working for Github through HEC but have not directly set up webhooks in Github.
Update @ITWhisperer you got me in the right direction. I was able to find the following article: https://docs.splunk.com/Documentation/ITSI/4.19.0/Configure/CustomRoles and was able to resolve the issue by including the new custom role under these KV store collections:

- itsi_services
- itsi_teams

using the following steps:

Step 4: Assign the role KV store collection level access

The SA-ITOA app includes default entries in metadata/default.meta that determine access to KV store collections for ITSI roles. For a list of default permissions to KV store collections for ITSI roles, see KV store collection permissions in ITSI. By default, only the itoa_admin role has read/write/delete access to all ITSI KV store collections.

Set permissions to KV store collections in Splunk Web

1. In Splunk Web, go to Settings > All configurations.
2. Set the App to IT Service Intelligence (itsi).
3. Set the Owner to Any.
4. Make sure Visible in the App is selected.
5. Filter by collections-conf to only display KV store collections.
6. For a specific view, click Permissions in the Sharing column.
7. Check the boxes to grant read and write permissions to the various collections for ITSI roles.
8. Click Save. This action updates KV store access permissions for the specific ITSI roles in $SPLUNK_HOME/etc/apps/SA-ITOA/metadata/local.meta.

Set permissions to KV store collections from the command line

1. Create a local.meta file in the SA-ITOA/metadata/ directory:
   cd $SPLUNK_HOME/etc/apps/SA-ITOA/metadata
   cp default.meta local.meta
2. Edit SA-ITOA/metadata/local.meta.
3. Set access for specific roles in local.meta. For example:
   [collections/itsi_services]
   access = read : [ itoa_admin, itoa_analyst, itoa_user ], write: [ itoa_admin ]
Assuming that each event has one of those "GetRisk completed..." lines, you could use this regex and where combination:

index=yourindex <other filters like sourcetype, etc>
| rex field=_raw "GetRisk completed in (?<ms>\d+) ms"
| where ms > 1900
I wanted to get some clarification on how trigger conditions affect notable response actions for correlation searches in Enterprise Security. The trigger condition options are "Once" and "For each Result", and I believe I understand the difference. However, under them there is a little blurb that says "Notable response actions and risk response actions are always triggered for each result." To me, this essentially nullifies "Once" since the action will be triggered for each result. As a result, I fail to see how "Once" is any different from "For each Result". But surely they can't be the same.
And what have you tried so far? And what fields have you parsed out from those events?
Hello, I'm trying to capture and show only the time it took for the service to complete. Shown below is a record that says the service completed in 1901 ms. Could you please help write a search query to identify and return records into my dashboard panel that exceed 1900 ms? So, for example, if there are 10 records that exceed 1900 ms, it will look something like this:

GetRisk completed in 1909 ms
GetRisk completed in 1919 ms
GetRisk completed in 2001 ms
GetRisk completed in 2100 ms

And so on...

msgTxt returns:

VeriskService - GetRisk completed in 1909 ms. (request details: environment: Production | desired services: BusinessOwnersTerritory | property type: Commercial xxxxx)

Thank you
Hi, I hope someone can help me. In my case the lookup has a CIDR definition, but the lookup is not matching and I know there is at least one match. This is my line:

| lookup file.csv network AS ip OUTPUT network AS sub_xarxa

Thanks in advance
Try this https://regex101.com/r/rlI3Xl/2 | rex field=source_hostname "(?i)^AZ(?<cap1>[A-Z0-9-]+?)(?=\1[A-Z0-9]{6})(?<temp_hostname4>\1[A-Z0-9]{6})-\d{10}-VMSS$"
Hello, can't load Settings page: "Something went wrong!" Configuration page failed to load (ERR0002)

Splunk Enterprise 9.1.1 (clustered) / standalone Splunk 9.0.4
Addon version 2.3.2 or 3.3 (same error)

Log:

07-11-2024 11:05:34.421 +0200 ERROR AdminManagerExternal [25211 TcpChannelThread] - Unexpected error "<class 'splunktaucclib.rest_handler.error.RestError'>" from python handler: "REST Error [500]: Internal Server Error -- Traceback (most recent call last):
  File "/OPT/splunk/etc/apps/TA-thehive-cortex/bin/ta_thehive_cortex/aob_py3/splunktaucclib/rest_handler/handler.py", line 117, in wrapper
    for name, data, acl in meth(self, *args, **kwargs):
  File "/OPT/splunk/etc/apps/TA-thehive-cortex/bin/ta_thehive_cortex/aob_py3/splunktaucclib/rest_handler/handler.py", line 338, in _format_all_response
    self._encrypt_raw_credentials(cont["entry"])
  File "/OPT/splunk/etc/apps/TA-thehive-cortex/bin/ta_thehive_cortex/aob_py3/splunktaucclib/rest_handler/handler.py", line 368, in _encrypt_raw_credentials
    change_list = rest_credentials.decrypt_all(data)
  File "/OPT/splunk/etc/apps/TA-thehive-cortex/bin/ta_thehive_cortex/aob_py3/splunktaucclib/rest_handler/credentials.py", line 289, in decrypt_all
    all_passwords = credential_manager._get_all_passwords()
  File "/OPT/splunk/etc/apps/TA-thehive-cortex/bin/ta_thehive_cortex/aob_py3/solnlib/utils.py", line 153, in wrapper
    return func(*args, **kwargs)
  File "/OPT/splunk/etc/apps/TA-thehive-cortex/bin/ta_thehive_cortex/aob_py3/solnlib/credentials.py", line 341, in _get_all_passwords
    return self._get_clear_passwords(passwords)
  File "/OPT/splunk/etc/apps/TA-thehive-cortex/bin/ta_thehive_cortex/aob_py3/solnlib/credentials.py", line 324, in _get_clear_passwords
    clear_password += field_clear[index]
TypeError: can only concatenate str (not "NoneType") to str
". See splunkd.log/python.log for more details.

Thanks for your help.