All Topics



I'm trying to use SED (via search) to append a string to the raw log in the results window if a condition is met anywhere in the raw event. In the example below, the condition is finding any series of six numbers:

index=* | rex mode=sed "s/(?<myTest>[0-9]{1,6})/\2<myTestFound>/g

What I would like is the following (note the single "<myTestFound>" at the end):

<MyData>"This is my raw log with 123456 present and 987654 also present</MyData><myTestFound>

But all I have been able to produce so far is:

<MyData>"This is my raw log with 123456<myTestFound> present and 987654<myTestFound> also present</MyData>

Can anyone give me some assistance in getting the first option going? Thanks
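A minimal sketch of one way to get a single appended marker, assuming single-line events and sed-mode rex against _raw: anchor the substitution to the whole event, so the greedy match can only fire once.

```
index=*
| rex mode=sed "s/^(.*[0-9]{6}.*)$/\1<myTestFound>/"
```

Because `.*` spans the entire event, the substitution matches at most once per event, keeping the original text via the `\1` backreference and appending the marker at the end only when six consecutive digits appear somewhere in the event.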
Hello, We are including the Pod Namespace and Pod Name in the log source (for K8s deployments) and would like these fields (Pod Namespace and Pod Name) to be extracted.

source: /var/lib/kubelet/pods/*/volumes/kubernetes.io~empty-dir/$(Volume Name)/$(POD_NS)/$(POD_NAME)/*.log

Most of our searches (including saved searches) will leverage both fields, or at least one of the two, and we were wondering whether it is better (performance-wise) to do the field extractions at index time or at search time. It looks like the general practice is to opt for search-time extraction; however, there may be cases where index-time extraction is preferred. The examples for using index-time extraction mentioned here (https://docs.splunk.com/Documentation/Splunk/8.2.3/Data/Configureindex-timefieldextraction) are not very clear; it seems like the first example might apply to our use case, so index time might be preferred? Thanks, Srikar
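For reference, a search-time extraction from the source path can be sketched like this; the sourcetype, stanza name, and output field names below are placeholders, and SOURCE_KEY points the transform at the source metadata rather than _raw:

```
# props.conf -- sourcetype name is a placeholder
[k8s:pod:logs]
REPORT-pod_fields = extract_pod_fields

# transforms.conf -- pull POD_NS and POD_NAME out of the source path
[extract_pod_fields]
SOURCE_KEY = MetaData:Source
REGEX = kubernetes\.io~empty-dir/[^/]+/([^/]+)/([^/]+)/[^/]+\.log$
FORMAT = pod_ns::$1 pod_name::$2
```

Because this runs at search time, it costs nothing at ingest and can be changed later without reindexing, which is part of why search-time extraction is the usual default.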
I am in the process of trying to configure a Tenant in this add-on. Some of the required values are available in the Azure AD integration application. There are a number of others that I have not been able to find values for. I have values for the first three items below; the last three I do not. Assistance with this would be appreciated.

- Tenant ID is the Directory ID from Azure Active Directory.
- Client ID is the Application ID from the registered application within the Azure Active Directory.
- Client Secret is the registered application key for the corresponding application.
- Cloud Application Security Token is the registered application key for the corresponding tenant.
- Tenant Subdomain is the first component of the Cloud App Security Portal URL. For example, https://<tenant_subdomain>.<tenant_datacenter>.portal.cloudappsecurity.com.
- Tenant Data Center is the second component of the Cloud App Security Portal URL. For example, https://<tenant_subdomain>.<tenant_datacenter>.portal.cloudappsecurity.com.
Hi All. Hopefully somebody has an answer to this. We are on v8.1.6 and, in doing some security cleanup, I was removing some LDAP mappings that were no longer needed or didn't need to be mapped in the first place.

Here comes the fun part. There are two groups that I cannot get to stay unmapped from a couple of specific roles. The roles are splunk-system-role and another called windows-admin that was created after setup. If I unmap one of these roles from group1, all is fine. As soon as I remove the same role from group2 and click save, that role shows up again for both groups.

If I delete the windows-admin role, it may seem fine, but users still show that role assigned and I can't remove it. On top of that, if I resync the LDAP, it all shows up again even though the windows-admin role doesn't exist. It's almost as if it's being automapped, but I can't find anything. I've gone so far as manually editing the authorization.conf file and removing those mappings there, verifying it syncs across the search heads, but no dice.

In addition, there are users that have multiple roles but are in only one of the AD groups mapped to a role, and I cannot remove the other roles, such as splunk-system-role. Or I have some with power and a custom role; I want to keep the custom role but remove power. It won't let me, and they are only in the AD group mapped to the custom role.

Very strange behavior. Short of filtering out all the groups other than those I want to show up in LDAP, are there any other ideas?
I have a quick question for the community. I remember reading that AppDynamics starts rolling up and aggregating information in the SaaS environment at 4 hours, and eventually the information will completely roll off a client's SaaS environment. I can't seem to find that information anywhere now.

The reason I'm asking is that we've had a rather large turnover at my company, and we have a couple of directors coming into our group who are going to be responsible for all things APM. I was telling them how AppD SaaS doesn't hold onto its data indefinitely: it starts rolling up and aggregating at set intervals and eventually rolls out of our environment. I'd like to have documentation to back that statement up but can't seem to find it now. Thanks, Bill Youngman
We are seeing duplicate hosts for Windows in the infrastructure overview. The issue is that VMware VMs with the same host name appear alongside the Windows hosts. Has anyone else run into this issue using ITSI or ITE?
Trying to find what could be the culprit of the following situation: Splunk Cloud client, running Version 8.2.2107.2, but it doesn't show the experience type.

From the apps page, I don't see a Download option on some of the apps we uploaded before. These are private apps, and I have the sc_admin role. I can't seem to find whether there is a specific missing capability. I've compared with another Splunk tenant where the Download function is available to me, and I also have the sc_admin role there. Can anyone point me in the right direction?
I have a search that I'm using to generate tokens on a dashboard. It only returns one row, so I'm using `$result.<field>$`. There are a large number of fields, and I want all of them to be tokens. Is there a way I can do that? Maybe with the dashboard eval thing? Using foreach perhaps? Thanks
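As far as I know, Simple XML has no built-in "turn every result field into a token" mechanism (fully dynamic token creation usually requires a JS extension), but the per-field pattern can be sketched with a `<done>` handler; the field names below are placeholders:

```xml
<search>
  <query>... your one-row search ...</query>
  <done>
    <!-- one <set> per field you want available as a token -->
    <set token="tok_status">$result.status$</set>
    <set token="tok_count">$result.count$</set>
  </done>
</search>
```

Tokens set this way persist after the search finishes, so other panels can reference `$tok_status$` and `$tok_count$` directly.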
I am running DLTK 3.7 CPU with Kubernetes. When I run a search query, I see the following error message:

unable to read JSON response from https://dltk-dev-api-splunk-apps.apps.oc47.rack8.ps2.dcws.labs/fit. Either you have no MLTK Container running or you probably face a network or connection issue. Returned with exception (Expecting value: line 1 column 1 (char 0))

Tested with the following search queries:

| inputlookup diabetes.csv | fit MLTKContainer response from * algo=binary_nn_classifier epochs=10 mode=stage into MyModel

| inputlookup server_power.csv | fit MLTKContainer mode=stage algo=linear_regressor epochs=10 batch_size=32 ac_power

My DLTK container is running fine. I can access the Jupyter notebook, and the DLTK API is working fine too:

curl --insecure https://dltk-dev-api-splunk-apps.apps.oc47.rack8.ps2.dcws.labs/
{"app": "Deep Learning Toolkit for Splunk", "version": "3.5.0", "model": "no model exists"}

Splunk Enterprise Version: 8.2.2 Build: 87344edfcdb4
The sentinelone.py process for the applications channel input has been running under a single PID for several days. It does not appear to be respecting the checkpoint. @aplura_llc_supp, any assistance on whether this is expected behavior, and if not, how the issue might be resolved, would be greatly appreciated.

We have three SentinelOne input channels enabled (Agents, Threats, Applications). The modular input for the threats channel is configured with an interval of 300; it seems to run fairly quickly (less than 2 minutes) and does not seem to be duplicating ingest.

The modular input for the agents channel is configured with an interval of 86400 and seems to run in about 45 minutes to 1 hour, but it does seem to be duplicating ingest based on the following search:

index=sentinelone_index sourcetype="sentinelone:channel:agents" earliest=1 latest=now
| rex mode=sed field=_raw "s/, \"modular_input_consumption_time\": \"\w{3}, \d{2} \w{3} \d{4} \d{2}:\d{2}:\d{2} (\+|\-)\d{4}\"/, \"modular_input_consumption_time\": \"\"/g"
| stats count as dup_count by _raw
| stats count by dup_count

The modular input for the applications channel is configured with an interval of 3600. It runs for multiple days with the same PID and seems to be duplicating ingest based on a similar search. It seems that it may not be respecting the checkpoint. The checkpoints for all three input channels appear to be getting set correctly under $SPLUNK_HOME/var/lib/splunk/modinputs/sentinelone/.
/opt/splunk/var/lib/splunk/modinputs/sentinelone/usea1-014.sentinelone.net_sentinelone-input-efd4172-40fe-b76-811f-c8cdf72132e-channel-applications.json

{"next_page": "", "last_execution": "1637470721"}

The input also appears to get the checkpoint successfully:

2021-11-29 07:40:21,521 log_level=INFO pid=31196 tid=MainThread file="s1_client.py" function="get_channel" line_number="373" version="sentinelone_app_for_splunk.v5.1.2.b35" action=calling_applications_channel status=start start=1637470721000 start_length=13 start_type=<class 'str'> end=1638193221000 end_length=13 end_type=<class 'str'> checkpoint=1637470721 channel=applications
2021-11-29 07:40:21,521 log_level=WARNING pid=31196 tid=MainThread file="s1_client.py" function="get_channel" line_number="365" version="sentinelone_app_for_splunk.v5.1.2.b35" action=got_checkpoint checkpoint={'next_page': '', 'last_execution': '1637470721'} channel=applications

The Input Add-On for SentinelOne App For Splunk (IA-sentinelone_app_for_splunk) is installed on the heavy forwarder. The Add-On for SentinelOne App For Splunk (TA-sentinelone_app_for_splunk) is installed on a search head cluster and a standalone Enterprise Security search head. The SentinelOne App For Splunk is not currently installed. All input channels are producing events. Any ideas on how to troubleshoot and/or resolve this issue would be appreciated.
Is there a way for VictorOps (Splunk On-Call) to mention a user in Slack within the incident message? I already linked the Slack user to the VictorOps user by using the slash command /victor-linkuser, but that only allows users to take actions within Slack. Once a user uses a Slack button to take an action, it mentions who took the action.

So basically I am looking for a way for the incident ticket to mention the Slack user in the Paging section, similar to the user being mentioned when they take an action via a Slack button. Thank you
Hello community, I apologize in advance; my English is bad, and Google Translate is my friend. My business is starting up on Splunk Enterprise and I am having a problem with a search that is probably simple but has blocked me for a few days.

Let me explain the context: one of our tools sends supervision alerts to Enterprise with a code describing their status (0: OK, 1: Warning, 2: Critical and 3: Unknown). The goal for me is to forward these alerts to Splunk OnCall to share them with other tools connected to OnCall. Sending to OnCall works, but I am stuck on the return-to-OK of my alerts. Here is the query that is sending the alerts currently:

index = events_hp | search state = 2 OR state = 3 | fields hostname service_description output

However, when an alert returns to OK, I cannot send the info to OnCall to close the alert there. I should be able to add state OK (state = 0) to my search, but only when the previous state was 2 or 3. Basically, I should be able to send an alert when the state is OK (0), but only if before this OK it was 2 or 3. Do you have any idea how I could do this? Regards, Rajaion
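The "OK only after Critical/Unknown" logic can be sketched with streamstats carrying the previous state per host/service; field names are taken from the query above, and this assumes each event carries one state value:

```
index=events_hp state=0 OR state=2 OR state=3
| sort 0 _time
| streamstats current=f window=1 last(state) as prev_state by hostname service_description
| where state=2 OR state=3 OR (state=0 AND (prev_state=2 OR prev_state=3))
| fields hostname service_description state prev_state output
```

The `sort 0 _time` forces chronological order (search results normally stream newest-first), so `window=1 last(state)` in streamstats holds the immediately preceding state, and the final `where` passes OK events only when they follow a 2 or 3.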
Hi, I am looking to get the out-of-the-box thresholds. Is there any way I can pull them from the controller? Microservices/Docker/Kubernetes and also the Web.
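One possible starting point, assuming a controller version that exposes the AppDynamics Health Rules API (the host, credentials, and application ID below are placeholders; confirm the exact path against your controller's API documentation):

```
curl -u user@account:password \
  "https://<controller-host>/controller/alerting/rest/v1/applications/<application-id>/health-rules"
```

This should list the health rules (including default out-of-the-box ones) for the given application, which can then be inspected for their threshold definitions.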
Need help with an eval function for trimming the month name. Ex: April = APR; all months' first 3 letters. Thanks.
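A minimal sketch, assuming a field named month holding the full month name (the field name is a placeholder):

```
| eval month_abbrev = upper(substr(month, 1, 3))
```

substr takes the first three characters and upper converts them to uppercase, so "April" becomes "APR".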
Hi, last week one of our vulnerability scans found that our universal forwarders were susceptible to the TLS CRIME vulnerability. To fix this, we updated our server configuration file (toggled allowSslCompression from true to false). Now we want to update the server configuration file on all the servers, but we found that the server configuration is system-specific and we can't just replace it on every server. We are not using a deployment server in our environment. Is there any other way we can go and append to the server config file? Thank you
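One approach worth considering, given how Splunk layers configuration: rather than replacing the whole system-specific file, drop a local server.conf containing only the overridden key, which merges over the defaults. A sketch, assuming a standard forwarder install path:

```
# $SPLUNK_HOME/etc/system/local/server.conf on each forwarder
[sslConfig]
allowSslCompression = false
```

Because this fragment is identical on every host, it can be distributed with any file-copy or configuration-management mechanism without touching the rest of each server's configuration (a forwarder restart is needed for it to take effect).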
Hi Everyone, I've heard many times that it is challenging to get an ITSI entities list with proper alias and informational field mappings in ITSI. A circulating SPL query mixes the types of the fields and does not fully solve the issue. Please check out this SPL script I have created for getting the right dataset.

| rest splunk_server=local /servicesNS/nobody/SA-ITOA/itoa_interface/entity report_as=text
| eval value=spath(value,"{}")
| mvexpand value
| eval info_fields=spath(value,"informational.fields{}"), alias_fields=spath(value,"identifier.fields{}"), entity_id=spath(value, "_key"), entity_title=spath(value, "title"), entity_name=spath(value, "identifying_name")
| appendpipe
    [ | mvexpand alias_fields
      | eval field_value = spath(value,alias_fields."{}"), field_type="alias"
      | rename alias_fields as field_name ]
| appendpipe
    [ | where isnull(field_type)
      | mvexpand info_fields
      | eval field_value = spath(value,info_fields."{}"), field_type="info"
      | rename info_fields as field_name ]
| where isnotnull(field_type)
| table entity_id entity_name entity_title field_name field_value field_type
Hi, I am trying to use this APM agent https://github.com/TeaTips/SplunkJavaAgent but when I run it, it returns this error:

[root@myserver opt]# ./splunkagent.jar
./splunkagent.jar: line 1: $'PK\003\004': command not found
./splunkagent.jar: line 2: $'\b\272\265\211A': command not found
./splunkagent.jar: line 3▒▒▒A▒3▒▒META-INF/MANIFEST.MFMʱ: No such file or directory
./splunkagent.jar: line 4: syntax error near unexpected token `)'
./splunkagent.jar: line 4: `▒0▒▒=▒wȨC.▒U)▒b'▒▒ ▒ʵ^▒▒ކ$'

Any idea? Thanks,
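The PK\003\004 bytes in the error are the zip magic number: a .jar is a zip archive, so executing it directly makes the shell try to interpret binary data as commands. It has to run through a JVM instead, and since this project is a Java agent, the invocation would look something like the following (the application jar name is a placeholder):

```
# attach the agent to the JVM running your application
java -javaagent:/opt/splunkagent.jar -jar myapp.jar
```

The exact agent arguments depend on the project's own configuration; check its README for the properties it expects.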
Hi Splunkers, I'm in trouble with a correlation rule creation. The purpose of the rule is the following: if a user group related to a database is changed by a remote user, the rule must trigger. Here is some additional detail:

- By "change" we mean that a user is added to or removed from a group, and/or an entire database user group is deleted.
- We are not focusing on a specific database product; the fact that the host involved is a database is determined by checking two lookup tables where the database IPs and ports are stored.
- If possible, we have to use a data model instead of a sourcetype. For this, I checked both the Database and Authentication data models, but I see that, in the ready fields, we are able to extract only data about users, not groups.

Any idea?
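A possible direction, assuming the data is CIM-mapped: the Change data model (root dataset All_Changes, with an Account_Management child) covers account and group modifications that neither the Database nor the Authentication model exposes. A sketch, where the lookup name and its fields are placeholders for your IP/port lookup tables:

```
| tstats summariesonly=false count from datamodel=Change where nodename=All_Changes.Account_Management
    by _time span=1s All_Changes.action All_Changes.object All_Changes.user All_Changes.dest
| rename All_Changes.* as *
| lookup database_hosts ip AS dest OUTPUT ip AS is_db
| where isnotnull(is_db)
```

The lookup step keeps only changes whose destination matches a known database host, which implements the "host involved is a database" check from the lookup tables.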
Hi, I recently ran into a problem where a playbook runs a workflow for a long time (usually hours) without stopping itself. The debug logs didn't show much information besides the event hanging. Is there something we could set up to kill the workflow if it's running for x amount of time?
Hi, I am trying to increase the height of a sunburst chart with the code below:

<option name="sunburst_viz.sunburst_viz.height">850</option>

but it's not working. Could you please help?
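A guess at the cause, based on how Simple XML usually handles sizing: height is a generic visualization option rather than a viz-namespaced one, so the option name may not need the sunburst_viz.sunburst_viz prefix. A hedged sketch (the search query is omitted as a placeholder):

```xml
<viz type="sunburst_viz.sunburst_viz">
  <search>
    <query>...</query>
  </search>
  <!-- generic panel option, not namespaced to the custom viz -->
  <option name="height">850</option>
</viz>
```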