All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I am having trouble removing a double quote from the middle of a string. Example: "My Name "is Ethan". I want to retain the quotes at the front and back but remove the one in the middle. I used the following SEDCMD:

SEDCMD-removeDoubleQuotes = s/\s"//g

This worked perfectly and gave the expected output: "My Name is Ethan". The problem is that sometimes the closing double quote has a space before it, and the SEDCMD then strips the quote from the end of the string as well, so "My Name "is Ethan " becomes "My Name is Ethan (the closing quote is lost). Can someone help me handle this and remove only the middle double quote? Help is appreciated.
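One possible approach (an untested sketch): only strip a quote that is followed by a word character, so the quote at the end of the string, which is followed by nothing, survives. The captured whitespace and following character are put back:

SEDCMD-removeDoubleQuotes = s/(\s)"(\w)/\1\2/g

On "My Name "is Ethan " this removes the middle quote (which is followed by "i") but leaves the trailing quote (followed by end of string) intact.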
We recently upgraded our Splunk from 8.1.4 to 8.2.2.2. After the upgrade, Dashboard Studio works fine in Google Chrome, but we see a blank page when trying to open the same dashboard in Firefox.
I have created a saved search and it runs every day. I then created a report that uses this saved search; all the report does is call the saved search like this:

| savedsearch mysavedsearchname

The problem is that when I run this report, it appears to re-run the query behind the saved search. I was hoping that instead of running the query it would show the last run's results from the saved search. If this is by design, how can I get the last run's results without running the search again? I know that I could easily push the saved search results to a CSV file and then read the CSV in the report, but I don't want to do that.
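A possible approach (a sketch; the owner and app in the job reference are assumptions): savedsearch always re-executes the SPL, while loadjob retrieves the artifact from the most recent scheduled run without running the search again:

| loadjob savedsearch="admin:search:mysavedsearchname"

The reference format is <owner>:<app>:<saved_search_name>, and loadjob only works if the saved search is scheduled so that a result artifact exists.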
Hey folks, I am trying to pull a result based on chart count by, and I am not sure whether there is another command that could produce this result. Starting from something like http.status IN (200,400,403) | chart count by path http.status, the end result I am looking for is:

path      200    400    403
/abc      10%    30%    60%
/xyz      20%    40%    40%
/home     35%    35%    30%

I have checked the community answers, but none of them is close to what I am looking for. If someone could guide and help me through this, that would be really helpful.
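A possible approach (a sketch; the index and field names are assumptions): chart the raw counts over path, add a per-row total with addtotals, then convert each status column into a row percentage with foreach:

index=web http.status IN (200,400,403)
| chart count over path by http.status
| addtotals
| foreach 200 400 403 [ eval "<<FIELD>>" = round('<<FIELD>>' / Total * 100, 0) . "%" ]
| fields - Total

Each row then shows what share of that path's requests returned each status code.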
Hi, I am using a distributed Splunk Enterprise deployment (at the Phantom end) to ingest Phantom logs into Splunk. The CORE SIT search head IP is used here and it works fine. But when we use the ES SIT search head IP, I get the error "Test connection failed for Phantom search on Host - xx.xx.xx.xx". Telnet connectivity works fine for both the CORE and ES search heads. Why are we unable to connect to the ES search head?
After I set up the configuration and settings on the Gsuite app in Splunk, it is able to collect the various audit logs, such as the admin/token logs in Gsuite, but not the login report. Any advice on this? Thanks a lot.
| set union
    [ search index=my_index | eval nums="1,2,3,4,5" | fields - _* | makemv delim="," nums | stats values(nums) as num ]
    [ search index=my_index | eval nums="2,3,4,5,6" | fields - _* | makemv delim="," nums | stats values(nums) as num ]

I would expect the result to be a single table with the values 1,2,3,4,5,6, but instead I just get the two datasets on top of each other.
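A likely explanation, with a possible fix (a sketch): set union only removes rows that are identical, and each subsearch here returns a single multivalue row; since the two multivalue sets differ, both rows are kept. Expanding to one value per row before the union lets the overlapping values collapse:

| set union
    [ search index=my_index | eval nums="1,2,3,4,5" | makemv delim="," nums | mvexpand nums | dedup nums | fields nums ]
    [ search index=my_index | eval nums="2,3,4,5,6" | makemv delim="," nums | mvexpand nums | dedup nums | fields nums ]
| sort nums

Alternatively, keep the original search and append | stats values(num) as num after the union to merge the two rows into one.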
Hello, we are integrating our on-prem Splunk (version 8.2.3) to retrieve messages from an Azure Event Hub. We have configured Linux server syslog to send to an Event Hub (Linux diagnostic extension 4.0). We installed the Splunk Add-on for Microsoft Cloud Services app (version 4.2.0), configured the Azure app account, and created the inputs to map to the Event Hub namespace/hub name/consumer group. We are seeing data arrive in Splunk, but it is arriving split across multiple events. If I put the three events together, they form valid JSON for a single event:

{"body":{ "time" : "2021-11-30T23:30:01.0000000Z", "resourceId" : "/subscriptions/xxx/resourceGroups/xxx/providers/Microsoft.Compute/virtualMachines/xxx", "properties" : { "ident" : "CRON", "pid" : "177365", "Ignore" : "syslog", "Facility" : "authpriv", "Severity" : "info", "EventTime" : "2021-11-30T23:30:01+0000", "SendingHost" : "localhost", "Msg" : "pam_unix(cron:session): session closed for user root", "hostname" : "xxx", "FluentdIngestTimestamp" : "2021-11-30T23:30:01Z" }, "category" : "authpriv", "level" : "info", "operationName" : "LinuxSyslogEvent" },"x-opt-sequence-number":100,"x-opt-offset":"77128","x-opt-enqueued-time":1638315007941}

But we need this to be a single event in Splunk in order to process the data effectively. Interestingly, we also use the Event Hub integration to retrieve resource diagnostic logs, and we don't see the same issue there; it only happens when using Event Hubs for Linux diagnostics. Has anyone faced this issue, or does anyone know how to correct the problem? Thanks!
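One possible direction (a heavily hedged sketch, not verified against this add-on; the sourcetype name is an assumption): if the splitting happens during line breaking on the parsing tier, forcing event boundaries to occur only in front of a new {"body" payload and lifting the truncation limit may reassemble the message:

[mscs:azure:eventhub]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\{"body"
TRUNCATE = 0

If the add-on instead splits the payload before it ever reaches line breaking, props changes will not help and the fix would have to be in the input itself.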
My current Splunk forwarder log monitoring is indexing events in groups (sometimes more than one event together), but I want each event (each of which has its own datetime at the start) to be indexed separately. Only the start of each event line is the same; the rest of the string varies. I tried configuring props.conf using the following settings:

LINE_BREAKER = ([\r\n]+)

(this is the default, but it does not seem to work even though my events are separated by newlines in the source log file), and then I tried:

BREAK_ONLY_BEFORE = ^\d+\s*$

Currently, multiple entries are being indexed together as one event; I want each entry indexed as a separate event. Example entries in the source file:

2021-Dec-01 Wed 08:50:06.914 INFO [Thread-3] - org.eclipse.jetty.server.session - {} - doStart(DefaultSessionIdManager.java:334) - DefaultSessionIdManager workerName=node0
2021-Dec-01 Wed 08:50:06.915 INFO [Thread-3] - org.eclipse.jetty.server.session - {} - doStart(DefaultSessionIdManager.java:339) - No SessionScavenger set, using defaults
2021-Dec-01 Wed 08:50:06.917 INFO [Thread-3] - org.eclipse.jetty.server.session - {} - startScavenging(HouseKeeper.java:132) - node0 Scavenging every 660000ms
2021-Dec-01 Wed 08:50:06.956 INFO [Thread-3] - org.eclipse.jetty.server.AbstractConnector - {} - doStart(AbstractConnector.java:331) - Started ServerConnector@5e283ab9{HTTP/1.1, (http/1.1)}{127.0.0.1:22113}
2021-Dec-01 Wed 08:50:06.956 INFO [Thread-3] - org.eclipse.jetty.server.Server - {} - doStart(Server.java:415) - Started @6850ms
2021-Dec-01 Wed 08:50:24.331 INFO [pool-6-thread-1] - com.automationanywhere.nodemanager.service.impl.WindowsEventServiceImpl - {} - onMachineLogon(WindowsEventServiceImpl.java:226) - Machine Logon: 1
2021-Dec-01 Wed 08:58:35.372 INFO [pool-6-thread-1] - com.automationanywhere.nodemanager.service.impl.WindowsEventServiceImpl - {} - onMachineLocked(WindowsEventServiceImpl.java:204) - Machine Locked: 1
2021-Dec-01 Wed 09:17:38.934 INFO [pool-6-thread-1] - com.automationanywhere.nodemanager.service.impl.WindowsEventServiceImpl - {} - onMachineUnlocked(WindowsEventServiceImpl.java:214) - Machine Unlocked: 1
2021-Dec-01 Wed 09:17:38.937 INFO [pool-6-thread-1] - com.automationanywhere.nodemanager.service.impl.WindowsEventServiceImpl - {} - onMachineUnlocked(WindowsEventServiceImpl.java:216) - Session id 1 removed from tracking on machine unlock.

I would appreciate any help in configuring props.conf to index each entry as a single event. TIA.
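A possible props.conf sketch (untested; the sourcetype name is an assumption). Since every event starts with a timestamp like 2021-Dec-01 Wed 08:50:06.914, you can break before each date and tell Splunk exactly how to parse the time:

[your_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\d{4}-\w{3}-\d{2}\s)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%b-%d %a %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 30

Note that these settings take effect where parsing happens (indexers or a heavy forwarder), not on a universal forwarder, so place the props.conf accordingly and restart that instance.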
I'm trying to get SED (via search) to append a string to the end of the raw log in the results window if a condition is met anywhere in the raw log; in the example below, the condition is finding any series of six digits:

index=* | rex mode=sed "s/(?<myTest>[0-9]{1,6})/\2<myTestFound>/g"

What I would like is the following (note the <myTestFound> at the end):

<MyData>"This is my raw log with 123456 present and 987654 also present</MyData><myTestFound>

But all I have been able to do so far is:

<MyData>"This is my raw log with 123456<myTestFound> present and 987654<myTestFound> also present</MyData>

Can anyone give me some assistance in getting the first option going? Thanks.
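A possible approach (a sketch): rather than replacing each match, match the whole event once, require six consecutive digits somewhere inside it, and append the marker after the captured whole:

index=* | rex mode=sed "s/^(.*[0-9]{6}.*)$/\1<myTestFound>/"

Without the g flag the substitution runs at most once, and because the capture group spans the entire event, <myTestFound> lands at the very end. Events with no six-digit run are left untouched.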
Hello, we are including the Pod Namespace and Pod Name in the log source (for K8s deployments) and would like these fields (Pod Namespace and Pod Name) to be extracted.

source: /var/lib/kubelet/pods/*/volumes/kubernetes.io~empty-dir/$(Volume Name)/$(POD_NS)/$(POD_NAME)/*.log

Most of our searches (including saved searches) will leverage both, if not at least one, of the two fields, and we are wondering whether it is better (performance-wise) to do the field extractions at index time or at search time. It looks like the general practice is to opt for search-time extraction, but there may be cases where index-time extraction is preferred. The examples for using index-time extraction mentioned here (https://docs.splunk.com/Documentation/Splunk/8.2.3/Data/Configureindex-timefieldextraction) are not very clear; it seems like the first example might apply to our use case, so index time might be preferred? Thanks, Srikar
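For the search-time route, a possible props.conf sketch (untested; the sourcetype and extracted field names are assumptions) that pulls both values out of the source path:

[your_sourcetype]
EXTRACT-pod_fields = (?<pod_namespace>[^/]+)/(?<pod_name>[^/]+)/[^/]+\.log$ in source

Search-time extraction costs nothing at ingest and can be changed later without re-indexing; index-time extraction mainly pays off when nearly every search filters on the field across very large data volumes.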
I am in the process of trying to configure a Tenant in this add-on. Some of the required values are available in the Azure AD integration application, but there are a number of others that I have not been able to find values for. I have values for the first three items below; the last three I do not. Assistance with this would be appreciated.

Tenant ID is the Directory ID from Azure Active Directory.
Client ID is the Application ID from the registered application within the Azure Active Directory.
Client Secret is the registered application key for the corresponding application.
Cloud Application Security Token is the registered application key for the corresponding tenant.
Tenant Subdomain is the first component of the Cloud App Security Portal URL. For example, https://<tenant_subdomain>.<tenant_datacenter>.portal.cloudappsecurity.com.
Tenant Data Center is the second component of the Cloud App Security Portal URL. For example, https://<tenant_subdomain>.<tenant_datacenter>.portal.cloudappsecurity.com.
Hi all. Hopefully somebody has an answer to this. We are on v8.1.6, and while doing some security cleanup I was removing some LDAP mappings that were no longer needed or didn't need to be mapped in the first place. Here comes the fun part.

There are two groups that I cannot get to stay unmapped from a couple of specific roles. The roles are splunk-system-role and another called windows-admin that was created after setup. If I unmap one of these roles from group1, all is fine. As soon as I remove the same role from group2 and click Save, that role shows up again for both groups. If I delete the windows-admin role, it may seem fine, but users still show that role assigned and I can't remove it. On top of that, if I resync LDAP, it all shows up again even though the windows-admin role doesn't exist. It's almost as if it's being automapped, but I can't find anything. I've gone so far as manually editing the authorization.conf file and removing those mappings, verifying the change syncs across the search heads, but no dice.

In addition, there are users that have multiple roles but are in only one of the AD groups mapped to a role, and I cannot remove the other roles, such as splunk-system-role. Or I have some users with power and a custom role; I want to keep the custom role but remove power, and it won't let me, even though they are only in the AD group mapped to the custom role.

Very strange behavior. Short of filtering out all the groups other than those I want to show up in LDAP, are there any other ideas?
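One way to see where the stray mappings actually persist (a sketch; the path assumes a default install): LDAP group-to-role mappings live in authentication.conf rather than authorization.conf, so dump the effective configuration with btool and note which file each line comes from:

$SPLUNK_HOME/bin/splunk btool authentication list --debug | grep -i roleMap

Any mapping that reappears after an LDAP resync should show up here under a [roleMap_<strategy>] stanza in some app's local directory.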
I have a quick question for the community. I remember reading that AppDynamics starts rolling up and aggregating information in the SaaS environment at 4 hours, and that eventually the information completely rolls off a client's SaaS environment. I can't seem to find that information anywhere now. The reason I'm asking is that we've had rather large turnover at my company, and we have a couple of directors coming into our group who are going to be responsible for all things APM. I was telling them that AppDynamics SaaS doesn't hold onto its data indefinitely; it starts rolling up and aggregating at set intervals, and eventually the data rolls out of our environment. I'd like to have documentation to back that statement up but can't seem to find it now. Thanks, Bill Youngman
We are seeing duplicate hosts for Windows in the Infrastructure Overview. The issue is that VMware VMs with the same host name appear alongside the Windows hosts. Has anyone else run into this issue using ITSI or ITE?
Trying to find what could be the culprit of the following situation: Splunk Cloud client, running Version 8.2.2107.2, but it doesn't show the experience type.

From the Apps page, I don't see a Download option on some of the apps we uploaded before. These are private apps, and I have the sc_admin role. I can't seem to find whether there is a specific capability that is missing. I've compared with another Splunk tenant where the Download function is available to me, and I also have the sc_admin role there. Can anyone point me in the right direction?
I have a search that I'm using to generate tokens on a dashboard. It only returns one row, so I'm using `$result.<field>$`. There are a large number of fields, and I want all of them to be tokens. Is there a way I can do that? Maybe with a dashboard eval? Using foreach perhaps? Thanks
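As far as I know there is no built-in wildcard that turns every result field into a token; the usual Simple XML pattern (a sketch; the search and field names are assumptions) is to enumerate the fields you need in the search's done handler:

<search>
  <query>index=main | stats latest(status) as status, latest(host) as host</query>
  <done>
    <set token="status_tok">$result.status$</set>
    <set token="host_tok">$result.host$</set>
  </done>
</search>

For a truly dynamic set of fields you would need a JavaScript extension that iterates over the result row and sets a token for each field it finds.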
I am running DLTK 3.7 CPU with Kubernetes. When I run a search query, I see the following error message:

unable to read JSON response from https://dltk-dev-api-splunk-apps.apps.oc47.rack8.ps2.dcws.labs/fit. Either you have no MLTK Container running or you probably face a network or connection issue. Returned with exception (Expecting value: line 1 column 1 (char 0))

I tested with the following search queries:

| inputlookup diabetes.csv | fit MLTKContainer response from * algo=binary_nn_classifier epochs=10 mode=stage into MyModel
| inputlookup server_power.csv | fit MLTKContainer mode=stage algo=linear_regressor epochs=10 batch_size=32 ac_power

My DLTK container is running fine. I can access the Jupyter notebook, and the DLTK API is working fine too:

curl --insecure https://dltk-dev-api-splunk-apps.apps.oc47.rack8.ps2.dcws.labs/
{"app": "Deep Learning Toolkit for Splunk", "version": "3.5.0", "model": "no model exists"}

Splunk Enterprise Version: 8.2.2, Build: 87344edfcdb4
The sentinelone.py process for the applications channel input has been running under a single PID for several days. It does not appear to be respecting the checkpoint. @aplura_llc_supp, any assistance on whether this is expected behavior, and if not, how the issue might be resolved, would be greatly appreciated.

We have three SentinelOne input channels enabled (Agents, Threats, Applications). The modular input for the threats channel is configured with an interval of 300; it seems to run fairly quickly (less than 2 minutes) and does not seem to be duplicating ingest.

The modular input for the agents channel is configured with an interval of 86400 and seems to run in about 45 minutes to 1 hour, but it does seem to be duplicating ingest based on the following search:

index=sentinelone_index sourcetype="sentinelone:channel:agents" earliest=1 latest=now
| rex mode=sed field=_raw "s/, \"modular_input_consumption_time\": \"\w{3}, \d{2} \w{3} \d{4} \d{2}:\d{2}:\d{2} (\+|\-)\d{4}\"/, \"modular_input_consumption_time\": \"\"/g"
| stats count as dup_count by _raw
| stats count by dup_count

The modular input for the applications channel is configured with an interval of 3600. It runs for multiple days with the same PID and seems to be duplicating ingest based on a similar search. It seems that it may not be respecting the checkpoint.

The checkpoints for all three input channels appear to be getting set correctly under $SPLUNK_HOME/var/lib/splunk/modinputs/sentinelone/. For example, /opt/splunk/var/lib/splunk/modinputs/sentinelone/usea1-014.sentinelone.net_sentinelone-input-efd4172-40fe-b76-811f-c8cdf72132e-channel-applications.json contains:

{"next_page": "", "last_execution": "1637470721"}

The input also appears to get the checkpoint successfully:

2021-11-29 07:40:21,521 log_level=INFO pid=31196 tid=MainThread file="s1_client.py" function="get_channel" line_number="373" version="sentinelone_app_for_splunk.v5.1.2.b35" action=calling_applications_channel status=start start=1637470721000 start_length=13 start_type=<class 'str'> end=1638193221000 end_length=13 end_type=<class 'str'> checkpoint=1637470721 channel=applications
2021-11-29 07:40:21,521 log_level=WARNING pid=31196 tid=MainThread file="s1_client.py" function="get_channel" line_number="365" version="sentinelone_app_for_splunk.v5.1.2.b35" action=got_checkpoint checkpoint={'next_page': '', 'last_execution': '1637470721'} channel=applications

The Input Add-On for SentinelOne App For Splunk (IA-sentinelone_app_for_splunk) is installed on the heavy forwarder. The Add-On for SentinelOne App For Splunk (TA-sentinelone_app_for_splunk) is installed on a search head cluster and a standalone Enterprise Security search head. The SentinelOne App For Splunk is not currently installed. All input channels are producing events. Any ideas on how to troubleshoot and/or resolve this issue would be appreciated.
Is there a way for VictorOps (Splunk On-Call) to mention a user in Slack within the incident message? I have already linked my Slack user to my VictorOps user using the slash command /victor-linkuser, but that only allows users to take actions within Slack. Once a user uses a Slack button to take an action, it mentions who took the action. So basically, I am looking for a way for the incident ticket to mention the Slack user in the Paging section, similar to how a user is mentioned when they take an action via a Slack button.
Is there way for Victorops (Splunk-On call) to mention user in slack within the incident message?  I already linked slack user to victorops user by using slash command /victor-linkuser but that only allows users to be able to take actions within the slack. Once user uses slack button to take actions, then it mentions who took the action.  So basically I am looking for a way for the incident ticket to mention slack user in Paging section, similar to user being mentioned when they take action by slack button    Thank you