All Posts


Hi @livehybrid , I just have another question. The certificate works and we are now doing the ingestion, thank you for that. Per the Admin guide, I have to do the following on Splunk Cloud:
1. Create the index
2. Enable the receiver on 9997
3. Enable TCP inputs on 514
We hit a blocker on the TCP inputs. Ideally this should be as easy as Settings > Data Inputs > Forwarded Inputs > TCP on the HF, but our approach is on Splunk Cloud (we don't use a HF for this data even though we have one for others; the project decided on a SaaS-to-SaaS integration for KW). Now the prompt reads: "You currently don't have any forwarders installed. If you've recently installed a new forwarder, click the refresh button below to reload page." Refreshing it does nothing. While I understand this from an on-prem deployment perspective, I can't fully understand the project's approach, and the Admin guide provided is not helpful either; there is no troubleshooting section for Splunk Cloud. How did you proceed on the ingestion piece?
Hi, I am looking for an SSL certificate check that does SNI. I've tried Certificates-Expiry; I get results, but it doesn't support SNI. Now I am trying SSL Certificate Lookup. The .py script seems to have provision for SNI, but I am not getting any results nor any errors. Everything is empty.
| makeresults
| eval url = "mywebsite.com"
| lookup sslcert_lookup dest AS url
What am I missing? Cheers Andre
Hi
For setting a 3-month (90-day) retention policy, you'll need to add or modify the settings for the "main" index in indexes.conf. The primary setting you're looking for is frozenTimePeriodInSecs, which controls how long data is kept before being frozen (and typically deleted). Update your indexes.conf file; if this is a single instance of Splunk, you will want to update $SPLUNK_HOME/etc/system/local/indexes.conf (typically /opt/splunk/etc/system/local/indexes.conf). Add or modify the [main] stanza with the appropriate retention setting:
[main]
# 90 days (3 months) in seconds: 90 * 24 * 60 * 60
frozenTimePeriodInSecs = 7776000
This setting will cause any data older than 90 days to be frozen and, by default, deleted (unless you've configured a custom coldToFrozenScript). You could also control retention by disk space using maxTotalDataSizeMB, which sets a maximum size for the index rather than a time-based policy. If the [main] stanza already exists in your indexes.conf, just add the frozenTimePeriodInSecs line to it. If the stanza doesn't exist, you'll need to create it. After making these changes, you'll need to restart Splunk for them to take effect:
$SPLUNK_HOME/bin/splunk restart
Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
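P.S. For reference, a sketch of a [main] stanza combining the time-based and size-based approaches (the 50 GB cap here is purely illustrative, not a recommendation for your environment):
[main]
# Freeze (and, by default, delete) events older than 90 days
frozenTimePeriodInSecs = 7776000
# Optionally also cap the index at roughly 50 GB (51200 MB);
# whichever limit is reached first triggers freezing
maxTotalDataSizeMB = 51200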
Hello, I currently deploy Splunk Enterprise and wanted to find out how to set a data retention policy for the index labelled ‘main’ within the indexes section in Splunk Enterprise. Since the ‘main’ index is filling up, taking most of the space on the SSD, I need to set a policy for any data in the ‘main’ index to auto-delete every 3 months. I have found the indexes.conf file, but under the settings for the ‘main’ index there isn't a line for frozen bucket duration time. Is it a case of me just adding the line for frozen bucket duration or max space? Thank you!
I still am completely confused.  Why don't you forget XML and just describe the UI controls, and give some examples of what inputs you would use and how they would affect the searches produced.
This is what I am getting. It is not working for me. Any help would be appreciated. Thanks
For this, just add a text box to filter the multiselect inputs, and then select all of them.
Hi @nithys , good for you, see you next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
@SOARt_of_Lost Going by your profile name, would appreciate your thoughts on this question as well! TIA https://community.splunk.com/t5/Splunk-SOAR/Splunk-SOAR-access-environment-variables/td-p/741231
We have a playbook which is making calls to the SOAR REST API artifacts endpoint. We are having to pass the auth token for the REST API call in the script as plain text, which isn't ideal. Given we haven't configured a vault or vault-like solution (CA, Vault, etc.):
1) We set a SOAR global environment variable and stored the value as a secret, but how do we call this in our script? I have tried looking at all possible attributes in the phantom library (documentation is next to none for this), and I also tried os.environ.get, but custom variables are not present in it. I am able to access variables like NO_PROXY, and it returns the respective value. Any ideas around this will help.
2) I am also trying to get the base URL for constructing the REST call. Using build_phantom_rest_url or get_base_url returns the URL as the local address 127.0.0.1 and not our specific URL.
In short, I am trying to access the values in the image within our custom function and haven't found a solution. Making a REST API call requires auth, and that option is ruled out for getting the API token. Any inputs will help. Thanks in advance.
@SOARt_of_Lost Appreciate the response. I have since figured out exactly what we want to achieve. The key to achieving it was figuring out how the value is passed to the filter. The Django 'in' filter expects a comma even if just one value is found for the custom field. So the Python script in the custom function looks at
/rest/artifacts?_filter_cef__<our_custom_field>__in="a","b","c","d"&page_size=0 for multiple values, and
/rest/artifacts?_filter_cef__<our_custom_field>__in="a",&page_size=0 when a single value is found.
As for the filter outputs to restrict fields, we eventually achieved that in the function output. The plan was to restrict the values/volume of data returned, but oh well, it wasn't working any which way, so function output was the way to go.
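For anyone hitting the same thing, a minimal sketch of that trailing-comma behaviour in plain Python (build_in_filter is a hypothetical helper, not a phantom API, and the field name is a placeholder):
# Build a Django-style __in filter value for the SOAR artifacts endpoint.
# The REST layer expects a trailing comma when only one value is present.
def build_in_filter(values):
    quoted = ",".join('"{}"'.format(v) for v in values)
    if len(values) == 1:
        quoted += ","  # single value still needs the trailing comma
    return quoted

# Multiple values: _filter_cef__my_field__in="a","b"
# Single value:    _filter_cef__my_field__in="a",
url = '/rest/artifacts?_filter_cef__my_field__in={}&page_size=0'.format(build_in_filter(["a"]))
print(url)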
Thanks @livehybrid . Looks like I was decrypting it wrong. I need to add '' as prefix and suffix. All good now. Thank you!!!
Hi @Paaattt
Are you able to get the password from the UF app downloaded from Splunk Cloud, rather than from a running Splunk instance? If you are trying to decrypt the value in a running instance, does it start with $7? (If so, you should be able to use the show-decrypted command, but remember to quote the value so the shell doesn't try to resolve a variable starting with $.)
$SPLUNK_HOME/bin/splunk show-decrypted --value '<encrypted_value>'
Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
Classic UI does not have a provision to allow "select all choices" with one click, but you can make an entry that has all choices. Maintainability of such code depends on how you populate this multivalue input. It can be very easy if the input is populated by a search; just use appendpipe, like this:
index=_internal
| stats count by group
| eval label = group
| appendpipe [stats values(group) as group | eval label = "All"]
In this example, I want an input that can select one or more values from the field group. For "normal" entries, label will be the same as the group value, but I use appendpipe to add a row with label "All". Here is a complete example for you to play with and compare with your real use case:
<form version="1.1" theme="dark">
  <label>"Select all choices"</label>
  <description>https://community.splunk.com/t5/Splunk-Search/Multiselect-filter-Select-all-matches-in-classic-dashboard/m-p/741079#M240547</description>
  <fieldset submitButton="false">
    <input type="multiselect" token="group_tok" searchWhenChanged="true">
      <label>Select groups</label>
      <fieldForLabel>label</fieldForLabel>
      <fieldForValue>group</fieldForValue>
      <search>
        <query>index=_internal | stats count by group | eval label = group | appendpipe [stats values(group) as group | eval label = "All"]</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
      <initialValue>bucket_metrics,cachemgr_bucket,conf,deploy-connections,deploy-server,dutycycle,executor,instance,kvstore_connections,map,mpool,parallel_reduce_metric,per_host_agg_cpu,per_host_lb_cpu,per_host_msp_cpu,per_host_thruput,per_index_agg_cpu,per_index_lb_cpu,per_index_msp_cpu,per_index_thruput,per_source_agg_cpu,per_source_lb_cpu,per_source_msp_cpu,per_source_thruput,per_sourcetype_agg_cpu,per_sourcetype_lb_cpu,per_sourcetype_msp_cpu,per_sourcetype_thruput,pipeline,pipelineset,queue,realtime_search_data,regex_cache,search_concurrency,search_health_metrics,search_pool,searchscheduler,spacemgr,subtask_seconds,tailingprocessor,telemetry_metrics_buffer,thruput,tpool,uihttp,version_control</initialValue>
      <delimiter> </delimiter>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>index=_internal group IN ($group_tok$) | stats count by group</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>
If the multiple choices are static, the code will be harder to maintain because every time you add or delete an entry, you must add or delete from two places. (In this case, use of makeresults may help maintain sanity.) Hope this helps.
Do you mean to say the pie chart is not showing labels on smaller slices? This is what I get when I really press down both graph size and smallest slice (0.41% is currently the smallest). Yes, the outward labels for smaller slices have disappeared, but each slice is still present; if I mouse over them, their individual labels, values, and shares still show (second screenshot). This seems to be a sensible use of real estate. Then, if you have many especially small slices, there will be a limitation as to whether your mouse cursor can land on a given slice. This is my new emulation:
| makeresults
| eval _raw = "PARAMETER VALUE
ASDF 6
XCV 34
ERT 1
TDED 14
GHT 3
GHB 2
BNHJ 57
QWE 17
DDD 9
YYY 8
KLO 7
POL 2
YUO 82
TRYU 2"
| multikv
| fields - _* linecount
| sort VALUE
Hi @livehybrid ,
Thank you. So Kiteworks accepts the following SSL certificate pieces:
SSL Password
Root Certificate
Intermediate Certificate
So yeah, I can move them to separate pem files. My remaining problem is the SSL password key. Splunk told me that the passphrase is located in $SPLUNK_HOME/etc/apps/100_**/local/outputs.conf:
[tcpout]
sslPassword = [value]
I decrypted the value using
$SPLUNK_HOME/bin/splunk show-decrypted --value '<encrypted_value>'
Unfortunately it is giving me this error:
139750988822336:error:06065064:digital envelope routines:EVP_DecryptFinal_ex:bad decrypt:crypto/evp/evp_enc.c:603:
139750988822336:error:0906A065:PEM routines:PEM_do_header:bad decrypt:crypto/pem/pem_lib.c:461:
A bad decrypt. What do you think I missed? I am doubting the SSL password, but if this is the right step, I need to try again and see how it goes.
I don’t propose doing the above commands, as those have several really bad side effects!
Hi
Can you show how those files appear in the file system (e.g. find /…. -type f)? Of course, mask real IPs, FQDNs, etc. A couple of lines is enough.
You could check whether your splunk user can see and read those by trying ls and cat on them as the splunk user. If it cannot see them or their content, then you should use setfacl to give access to only the splunk user (see the sketch below). Never use any chmod which gives access to all users! That is actually a security breach…
One thing which you could try as the splunk user:
splunk list inputstatus
which shows whether splunk has read those files and, if so, how much it has already read.
r. Ismo
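P.S. A sketch of the setfacl approach, with placeholder paths (adjust the user and paths to your environment):
# Give only the splunk user read access to the monitored file
setfacl -m u:splunk:r-- /var/log/myapp/app.log
# The splunk user also needs traverse (execute) rights on the parent directory
setfacl -m u:splunk:r-x /var/log/myapp
# Verify the resulting ACL
getfacl /var/log/myapp/app.log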
Hi
I think that this is the correct instruction for this case: https://splunk.my.site.com/customer/s/article/KV-Store-Error-after-upgrading-Splunk-Enterprise
The issue is with the server name, not with the CA. So try to disable the sslVerifyServerName attribute as guided in the instruction above.
r. Ismo
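P.S. If I remember right, that attribute lives under the [sslConfig] stanza in server.conf; a sketch (verify against the linked article and your version's docs before applying):
[sslConfig]
# Skip verifying the server name against the certificate
sslVerifyServerName = false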
I see that you are running Splunk on Windows? I don't have so much experience with how Windows internals work in current versions, but are you sure that Splunk can use all that added memory without additional configuration? E.g. on Linux you must at least disable boot-start and re-enable it again; otherwise systemd doesn't know that Splunk is allowed to use that additional memory.
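On Linux that re-registration looks roughly like this (a sketch only; the -user value and flags are assumptions that vary by Splunk version and install):
# Drop the existing systemd unit and its old resource limits
$SPLUNK_HOME/bin/splunk disable boot-start
# Re-create the unit so systemd picks up the host's current memory
$SPLUNK_HOME/bin/splunk enable boot-start -user splunk -systemd-managed 1
# Make systemd reload the regenerated unit
systemctl daemon-reload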