All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I'm having trouble getting a new deployment client to connect to the DS. I can see connectivity is established, but the client keeps logging an error:

DC:DeploymentClient ... channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected

Looking at the splunkd_access log on the DS, I can see the handshake message being received with a 401 by the DS:

10.X.X.2 - - ... "POST /services/broker/connect/GUID/CLIENTNAME/guff/linux-x86_64/8089/9.0.2/GUID/universale_forwarder/CLIENTNAME HTTP/1.1" 401

I have plenty of Windows machines in the environment connecting successfully to this DS (also running on Windows), but this server and a few other Linux machines are not connecting. Any advice?
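One minimal thing worth double-checking (a sketch, with a placeholder hostname) is the client-side deploymentclient.conf on the failing Linux boxes — a 401 on the broker/connect endpoint means the DS rejected the client's handshake, not that the network path is broken:

```spl
# etc/system/local/deploymentclient.conf on the Linux forwarder
# (ds.example.com:8089 is a placeholder for your deployment server)
[deployment-client]

[target-broker:deploymentServer]
targetUri = ds.example.com:8089
```

Comparing `splunk btool deploymentclient list --debug` between a working Windows client and a failing Linux one may show what differs (for example a stale clientName or a mismatched shared secret).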
I have gone through a few videos on YouTube and the documentation provided, but I am unable to find a source with the steps to integrate the data into Splunk Observability Cloud. Most of the tutorials available have the integration done before recording, and the videos mainly explain the functionality of Splunk APM. May I have a reference on how to set up Splunk APM in Splunk Observability Cloud?
I have a field EXT-ID[48] of 18 bytes, where the first three bytes should contain the identifier OCT, positions 8-10 should contain a value from 000 to 100, and position 11 should contain a value from 1-3.

A Splunk log entry looks like this:

EXT-ID[48] FLD[Additional Data, Priva..] FRMT[LVAR-Bin] LL[1] LEN[11] DATA[OCT 1]

For example, here the identifier is OCT, but positions 8-10 are blank while position 11 has a value. I need a Splunk query that checks that positions 1-3 contain OCT and positions 8-10 contain a value from 000 to 100 — basically, that positions 8-10 are non-blank in EXT-ID[48].

I have tried this query, but it's not working:

index=au_axs_common_log source=*Visa* "EXT-ID[48] FLD[Additional Data, Priva..]"
| rex field=_raw "(?s)(.*?FLD\[Additional Data, Priva.*?DATA\[(?<F48>[^\]]*).*)"
| search F48="OCT%"
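One possible approach (a sketch — the index, source, and DATA[...] pattern are taken from the question, but the fixed-width positions are assumptions): extract the payload and test the byte positions with substr instead of a wildcard search. Note also that F48="OCT%" uses a SQL-style wildcard; in SPL the wildcard character is *.

```spl
index=au_axs_common_log source=*Visa* "EXT-ID[48] FLD[Additional Data, Priva..]"
| rex field=_raw "DATA\[(?<F48>[^\]]+)\]"
| eval id = substr(F48, 1, 3)
| eval pos8_10 = substr(F48, 8, 3)
| where id="OCT" AND match(pos8_10, "^(0\d\d|100)$")
```

The match() pattern accepts 000-099 via 0\d\d plus the literal 100, which also guarantees positions 8-10 are non-blank.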
We have an issue where someone earlier created an input on a HF and did the data onboarding, but data has now stopped coming into Splunk. We are unable to find out which HF was used to create the input. Is there any way to find out which HF was being used to send the data to the Splunk search head?
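One way to narrow this down (a sketch; the index and sourcetype are placeholders for the data that stopped arriving): look at the metadata of the events that did make it in, since host and source usually identify where they came from.

```spl
index=your_index sourcetype=your_sourcetype earliest=-90d
| stats latest(_time) as last_seen by host, source, sourcetype
```

The indexers' own connection logs can also help: searching index=_internal source=*metrics.log* group=tcpin_connections lists which forwarders have been connecting in, and when the failing one dropped off.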
I have sample data in my Redis database as below. I have created an input there named abc_test, and the index is abc_test. I've observed that no data is returned from the search query. May I get your assistance on how to test the Redis Enterprise Add-On for Splunk, please? Thank you.
Hello. Regarding this post: https://community.splunk.com/t5/Building-for-the-Splunk-Platform/Impact-of-increasing-the-queue-size/m-p/630016#M10927 — what are the best practices for managing queue sizes? Let's talk about servers running only splunkd (indexers and HFs) with 16GB of total physical memory. Thanks.
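For reference, queue sizes live in server.conf; a sketch of the syntax (the values here are illustrative, not a recommendation):

```spl
# server.conf on an indexer/HF -- illustrative values only
[queue=parsingQueue]
maxSize = 6MB

[queue=aggQueue]
maxSize = 6MB
```

The usual guidance is to raise a queue only after metrics.log shows it sustained at blocked=true, since larger queues mostly buy memory and time rather than throughput — the real bottleneck is normally downstream (indexing I/O or the output pipeline).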
Hi all. Through my work I'm building a little distributed test environment. To make it extra hard on me, they have set up the search head, indexer, and forwarder on different vNets. Also, only the search head has a public IP. My question is: how do I connect the indexer to the search head when the indexer does not have a public-facing IP? Hope the question makes sense. Jacob
Can Splunk Observability be used with non-cloud applications?
I want to add a dropdown menu to a table value. Each value in a row should be a collapsible dropdown giving the description of the value. For example, if a column entry has the value R_5 and I click on it, it should expand and show radius=5. I am able to use a tooltip for this, but I want a dropdown instead.
Hi Community! I'm hoping someone can set my head straight. I have two input apps: one that I push to all *NIX servers (Splunk_TA_nix), and one additional app that I want to push to one specific server, serverXX (Splunk_TA_nix_serverXX_inputs). For serverXX, I want an additional blacklist entry to exclude all files named /var/log/syslog/XYZ.*

Splunk_TA_nix/local/inputs.conf (other stanzas exist but have been removed for this example):

[monitor:///var/log]
whitelist = kern*|syslog$
blacklist = (lastlog|cron|FILES.*$)
disabled = 0
index = nix
sourcetype = syslog

Splunk_TA_nix_serverXX_inputs/local/inputs.conf (the app just contains this stanza):

[monitor:///var/log]
whitelist = kern*|syslog$
blacklist = (lastlog|cron|FILES.*$|XYZ\.)
disabled = 0
index = nix
sourcetype = syslog

I tried pushing the two apps to serverXX, and btool shows that it's picking up the blacklist from Splunk_TA_nix (not the one with XYZ), so I guess I'm doing this all wrong! What is the correct way to exclude XYZ files for only serverXX while deploying to all *NIX hosts?
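If I understand the precedence rules correctly, when the same [monitor:///var/log] stanza exists in two apps, each attribute comes from whichever app sorts first in ASCII order — and Splunk_TA_nix sorts before Splunk_TA_nix_serverXX_inputs, which would explain the btool output. A sketch of one workaround (the app name below is made up): rename the override app so it sorts earlier. Since attributes merge individually, the override app only needs the one setting it changes:

```spl
# $SPLUNK_HOME/etc/apps/A_Splunk_TA_nix_serverXX/local/inputs.conf
# "A_Splunk_TA_nix_serverXX" is a placeholder name chosen to sort
# before "Splunk_TA_nix" so its blacklist takes precedence.
[monitor:///var/log]
blacklist = (lastlog|cron|FILES.*$|XYZ\.)
```

Running splunk btool inputs list 'monitor:///var/log' --debug on the client afterwards should show the blacklist coming from the new app.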
Hi there, we have one of our event logs set to archive, but some files were already there before we started ingesting this log. If I want to bring these logs into Splunk, how do I do it? I understand that in this case a UF with Windows inputs is the only option. I deployed the below with the deployment server and restarted it, but this log did not make it to Splunk. Any ideas what could be the problem, or any other way I can bring exported/archived event logs into Splunk?

[monitor://C:\windows\system32\winent\logs\Archive_log.evtx]
disabled = 0
index = idx
Average response time with 10% additional buffer ( single number)
I have a simple lookup table that contains a list of IPs.  I'd like to take this list and search across all of my indexes, which don't all use the same fields for source/destination IPs.  What would be the best/most efficient way to search all of these indexes for IP matches?
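One common pattern (a sketch; the lookup name and its ip column are placeholders): rename the lookup field to query inside a subsearch. Splunk then searches for each IP as raw text across every field, so you don't need to know the source/destination field names per index.

```spl
index=* earliest=-24h
    [ | inputlookup ip_watchlist.csv
      | rename ip AS query
      | fields query ]
| stats count by index, sourcetype
```

The stats at the end just summarizes where matches were found; drop it to see the raw matching events. Be aware an all-index raw-text search like this can be expensive over long time ranges.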
Hi, I'm having trouble seeing the "Advanced Hunting Results" dashboard section of the "Microsoft 365 App for Splunk" app. I have the "Splunk Add-on for Microsoft Security" add-on installed, but I can't get the sourcetype m365:defender:incident:advanced_hunting. I have already validated that the permissions within the application in AAD are granted. Any ideas?
So I have a search I run for an alert which looks for a missing event. It's a simple tstats that shows activity within the last 30 days. I would like to compare against the 90-day variant in the same search and determine the missing events. Any ideas?
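One sketch (the index name is a placeholder): run the tstats over the full 90 days and keep hosts whose latest event is older than 30 days — i.e. things present in the 90-day window but missing from the 30-day one.

```spl
| tstats max(_time) as last_seen where index=your_index earliest=-90d by host
| where last_seen < relative_time(now(), "-30d")
| eval days_silent = round((now() - last_seen) / 86400, 1)
```

Swap host for whatever field identifies your expected event source. This avoids running two searches and appending them together.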
Hello, I have the below SPL with two mvindex functions. mvindex position 6 in the array is supposed to supply HTTP statuses for /developers, and mvindex position 10 is supposed to supply HTTP statuses for /apps. Currently positions 6 and 10 are crossing events, applying to both APIs. Is there any way I can have each mvindex apply to only one path?

(index=wf_pvsi_virt OR index=wf_pvsi_tmps) (sourcetype="wf:wca:access:txt" OR sourcetype="wf:devp1:access:txt") wf_env=PROD
| eval temp=split(_raw," ")
| eval API=mvindex(temp,4,8)
| eval http_status=mvindex(temp,6,10)
| search ( "/services/protected/v1/developers" OR "/wcaapi/userReg/wgt/apps" )
| search NOT "Mozilla"
| eval API = if(match(API,"/services/protected/v1/developers"), "DEVP1: Developers", API)
| eval API = if(match(API,"/wcaapi/userReg/wgt/apps"), "User Registration Enhanced Login", API)
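Note that mvindex(temp,6,10) with two indexes returns the whole range of elements 6 through 10, not just positions 6 and 10 — which would explain the crossing. A sketch of one way to pick a single position per path (assuming the positions really are 6 for /developers and 10 for /apps):

```spl
| eval temp = split(_raw, " ")
| eval http_status = case(
    match(_raw, "/services/protected/v1/developers"), mvindex(temp, 6),
    match(_raw, "/wcaapi/userReg/wgt/apps"),          mvindex(temp, 10))
```

With a single index argument, mvindex returns exactly one element, and case() routes each event to the position appropriate for its path.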
Field = 1.123456789
Field = 14.123456
Field = 3.1234567

I need to run a query that returns the number of decimal places for each record in Field.

Expected result:
9
6
7
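A sketch using the sample values above: split each value on the decimal point and take the length of the fractional part. The makeresults scaffolding is only there to reproduce the sample data; in practice you would apply just the final eval to your own events.

```spl
| makeresults
| eval Field = split("1.123456789,14.123456,3.1234567", ",")
| mvexpand Field
| eval decimals = len(mvindex(split(Field, "."), 1))
```

This should yield 9, 6, and 7 for the three sample values. If some records may be whole numbers with no decimal point, wrap the expression in coalesce(..., 0) to avoid nulls.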
Hello all. I have a .csv report that gets generated regularly and that I'm monitoring; that part works fine. I'm trying to figure out how to display it, because the data (events?) are in columns. Is this possible? Example data here:

Hosts     server1  server2
IPLevel   median   median
Tip1662   N/A      N/A
Tip1663   PASSED   PASSED
Tip1664   FAILED   FAILED
Tip1666   PASSED   PASSED
Tip1667   PASSED   PASSED
Tip1668   PASSED   PASSED
Tip1669   N/A      N/A
Tip1671   PASSED   PASSED
Tip1674   SKIPPED  SKIPPED
Tip1675   FAILED   FAILED
Tip1676   PASSED   PASSED
Tip1677   PASSED   PASSED
Tip1680   PASSED   PASSED
Tip1685   PASSED   PASSED
Tip1687   PASSED   PASSED
Tip1688   SKIPPED  SKIPPED
Tip1689   SKIPPED  SKIPPED
Tip1690   FAILED   FAILED
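If the goal is to flip rows and columns, the transpose command may be all that's needed (a sketch; the source reference is a placeholder for however you search the monitored CSV):

```spl
source="your_report.csv"
| table Hosts, server1, server2
| transpose 0 header_field=Hosts column_name=host
```

header_field=Hosts makes the Tip values become the column headers, column_name names the new first column, and 0 removes the default row limit.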
I have an alert configured: the search finds an error in a Windows event log, and the alert is set up to trigger a notification email. Is there a way to have the alert run a PowerShell script when the error is found?
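One legacy option (a sketch; the file names are made up, and note this "run a script" alert action is deprecated in newer Splunk versions in favor of custom alert actions): drop a batch wrapper in $SPLUNK_HOME\bin\scripts that invokes powershell.exe, then reference it from the saved search.

```spl
# savedsearches.conf stanza for the alert (script action is legacy/deprecated)
[My Windows Error Alert]
action.script = 1
action.script.filename = run_remediation.bat
```

```bat
REM $SPLUNK_HOME\bin\scripts\run_remediation.bat (hypothetical wrapper)
powershell.exe -ExecutionPolicy Bypass -File "C:\scripts\remediate.ps1"
```

The wrapper is needed because the script action executes files directly, and wrapping PowerShell in a .bat sidesteps execution-policy and interpreter-association issues.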
I am looking to create a simple pie chart that contrasts the total number of users during any given timeframe vs. how many logged into a specific app. I am probably overthinking this, but what I did is a search for the distinct count of users during a period, then joined another search that calculates the distinct count of users that logged into a specific app over that same period. For example:

index="okta" "outcome.result"=SUCCESS displayMessage="User single sign on to app"
| stats dc(actor.alternateId) as "Total Logins"
| join
    [ | search index="okta" "target{}.displayName"="Palo Alto Networks - Prisma Access" "outcome.result"=SUCCESS displayMessage="User single sign on to app"
      | stats dc(actor.alternateId) as "Total Palo Logins" ]
| table "Total Palo Logins" "Total Logins"

The only issue is I can't get a proper pie chart of the percentage of Palo logins vs. total logins. Any help would be appreciated. I am sure I am missing something simple here.
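A sketch of one way to get pie-friendly output without the join (index and app names copied from the question): count both populations in one pass, derive the non-Palo remainder, then transpose into category/count rows, which is the shape a pie chart expects.

```spl
index="okta" "outcome.result"=SUCCESS displayMessage="User single sign on to app"
| eval palo_user = if('target{}.displayName'="Palo Alto Networks - Prisma Access",
                      'actor.alternateId', null())
| stats dc(actor.alternateId) as total, dc(palo_user) as "Palo Logins"
| eval "Other Logins" = total - 'Palo Logins'
| fields "Palo Logins", "Other Logins"
| transpose column_name=category
| rename "row 1" as count
```

Using "Other Logins" (total minus Palo) rather than the raw total keeps the two pie slices mutually exclusive, so the percentages are meaningful.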