All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


We're in the middle of a micro-segmentation project and are cataloging our Splunk resources. This is for an on-prem deployment. Splunk has a handy chart of ports, but it does not cover the Monitoring Console: https://docs.splunk.com/Documentation/Splunk/9.0.3/InheritedDeployment/Ports  Does anyone know which ports the Monitoring Console needs? I was thinking 8089 bi-directionally to all the Splunk servers, plus 9997 to the indexers, plus the web port, but I couldn't find documentation to support that. Thanks, any help is appreciated.
Hi, I am looking for a way, when a notification is triggered in Splunk, to mention an employee or a group (@...) in the Microsoft Teams message so they can give feedback. I already have the notifications set up so that they reach the correct Teams channels via the webhook. Thanks in advance!
Here is the event I have; I need to extract "sts:ExternalId":

requestParameters: {
  policyDocument: {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "AllowAssumeRoleForAnotherAccount",
        "Effect": "Allow",
        "Principal": { "AWS": "arn:aws:iam::384280045676:role/jenkins-node-custom-efep" },
        "Action": "sts:AssumeRole",
        "Condition": { "StringEquals": { "sts:ExternalId": "efep" } }
      }
    ]
  }
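A minimal sketch of one way to pull that value out with rex (the index and sourcetype here are assumptions; spath on requestParameters.policyDocument would also work if the event is valid JSON):

```
index=aws sourcetype=aws:cloudtrail "sts:ExternalId"
| rex "\"sts:ExternalId\":\s*\"(?<external_id>[^\"]+)\""
| table _time, external_id
```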
I want to see the 500 error count for each customer over time (Today/Yesterday/LastWeekOfDay), so 3 days in total. The screenshot below is a Kibana chart. How can we create the same kind of chart in Splunk?

I have tried the timechart query below, but the x-axis shows time first instead of customerId:

index="services" statusCode="500" | timechart span=1d count by customerId

I have also tried the query below, but I don't think the counts in the result are correct:

index="services" statusCode="500" | bucket _time span=day | chart count by customerId,_time | head 10

Is there a better way to do it?
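To get customerId on the x-axis with one series per day, `chart ... over ... by ...` is a possible shape. A sketch (the 3-day window and the day label format are assumptions to adapt):

```
index="services" statusCode="500" earliest=-2d@d
| eval day = strftime(_time, "%Y-%m-%d")
| chart count over customerId by day
```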
Hello everyone, I have a column that contains week1, week2, week3, week4, week5, and I want an input on the chart that shows me the data from week1 to week3, for example, or from week2 to week5. How could I do that?
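One way is two dashboard dropdown inputs that set numeric tokens (here hypothetically $week_from$ and $week_to$) and a where clause over the week number; the field name week is an assumption:

```
<base search>
| eval week_num = tonumber(replace(week, "week", ""))
| where week_num >= $week_from$ AND week_num <= $week_to$
```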
Hi, I have the following joined Splunk query:

index="myIndex" source="mySource1" | fields _time, _raw | rex "Naam van gebruiker: (?<USER>.+) -" | dedup USER | table USER | sort USER | join type=left [ search index="myIndex" source="mySource2" "User:myUserID The user is authenticated and logged in." | stats latest(_raw) ]

The results look like this: green is myUserID; red is some other person's user ID. Because I am using my hard-coded user ID, every person gets the latest(_raw) record corresponding to my user ID. I want each user to get their own event. I believe this can be done if I use the USER field in the second search, but I don't know the syntax to get it to work. I tried:

"User:'USER' The user is authenticated and logged in."

and also

"User:\USER\ The user is authenticated and logged in."

But these don't work. What is the correct syntax?
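A subsearch cannot see fields from the outer search that way. One common alternative is to search both sources at once, extract USER from each message format, and let stats correlate per user. A sketch (the exact "User:<id>" message layout in mySource2 is an assumption):

```
index="myIndex" (source="mySource1" "Naam van gebruiker") OR (source="mySource2" "The user is authenticated and logged in.")
| rex "Naam van gebruiker: (?<USER>.+) -"
| rex "User:(?<USER>\S+) The user is authenticated"
| stats latest(eval(if(source="mySource2", _raw, null()))) as latest_login by USER
| sort USER
```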
I am working on a KPI script and I need to deduplicate lines within a field. It looks like this: (screenshot). Is there an | eval field=substr(...) for the first line of the field, or some regex that can deduplicate my values? Thanks.
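A sketch of one approach: split the field into a multivalue on line breaks with makemv (its tokenizer takes a regex, so newlines are easy to express), then deduplicate with mvdedup; the field name my_field is an assumption:

```
| makemv tokenizer="([^\r\n]+)" my_field
| eval my_field = mvdedup(my_field)
| eval first_line = mvindex(my_field, 0)
```

The last eval shows how to grab just the first line if that is all the KPI needs.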
Hello team, I have the following problem. Inside my data I have strings like:

Error in Data | 5432323 from endpoint 543336
Error in Data | 1344214 from endpoint 543446
Error in Data | 1323214 from endpoint 545536

The field in Splunk is called error_message. The goal is to filter these events out of the search results with a lookup, so that when I don't want to see these messages in further searches I can just adapt the lookup. The idea was something like this in test.csv:

check, error_message
true, Error in Data | * from endpoint *

| lookup test.csv error_message output check | search check!=true

I tried the suggestions from https://community.splunk.com/t5/Splunk-Search/Can-we-use-wildcard-characters-in-a-lookup-table/td-p/94513 but it didn't work for me. Thank you all.
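Wildcards in a CSV lookup only match when the lookup definition has match_type set to WILDCARD for that field. A sketch (the definition name test_lookup is an assumption; the same settings can be made under the lookup definition's advanced options in the UI):

```
# transforms.conf
[test_lookup]
filename = test.csv
match_type = WILDCARD(error_message)

# then reference the definition (not the .csv file) in the search:
| lookup test_lookup error_message OUTPUT check
| where isnull(check) OR check != "true"
```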
We've integrated the Palo Alto NGFW with our Splunk. Logs are only coming in for the Threat log type, although we're forwarding other log types as well, such as Traffic, URL Filtering, and Data Filtering. All the integration and configuration appears correct. Can someone help me get the logs from the other sources as well, or tell me why logs from the other sources are not coming in?
Hello Splunkers, please can someone help me with a Splunk query? I have a list of IPs I imported into a lookup table, and I want to grab the FW traffic where dest_ip in the FW logs matches my lookup list of IPs. I'm confused about which command I should use in the search, inputlookup or lookup. Moreover, I would be grateful if someone could explain the difference between inputlookup and lookup with an example. Thank you, Moh
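Roughly: inputlookup returns the lookup's rows as search results (so it can feed a subsearch that filters the main search), while lookup enriches each event by matching on a field. Both approaches sketched below; the file name ip_list.csv, its column name ip, and the index/sourcetype are assumptions:

```
index=fw_index sourcetype=firewall
    [ | inputlookup ip_list.csv | rename ip AS dest_ip | fields dest_ip ]

index=fw_index sourcetype=firewall
| lookup ip_list.csv ip AS dest_ip OUTPUT ip AS matched_ip
| where isnotnull(matched_ip)
```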
I want to create an alert that notifies me if error_count is continuously increasing over time for any of the groups in the table columns. I have used timechart, which gives the sum of error_count for the different groups over time, and I need to compare the rows. I want a query that triggers the alert when every row value is greater than its previous row for its respective column; if any column satisfies this condition, the alert should be raised. In simple words: alert when error_count increases with time for any group. My sample query:

<<BASE QUERY>> earliest=-4h@h latest=@h | timechart span=30m sum(error_count) as c by group

The result of this query is in the attached image; consider this table as sample data for the alert query.
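A sketch of one way to detect a strictly increasing series per group with streamstats (the window and field names follow the sample query; treat it as a starting point, not a finished alert):

```
<<BASE QUERY>> earliest=-4h@h latest=@h
| bin _time span=30m
| stats sum(error_count) as c by _time, group
| sort 0 group, _time
| streamstats current=f window=1 last(c) as prev_c by group
| eval rise = if(isnull(prev_c) OR c > prev_c, 1, 0)
| stats sum(rise) as rises, count as buckets by group
| where rises = buckets
```

Groups that pass the final where rose at every 30-minute step; set the alert to trigger when results are returned.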
Hi Splunk community, I have a chart displaying the number of users in each month. No data came in during October and November, and I want to carry September's number forward for October and November so the chart shows a continuous trend. Here's my query:

<my search> | timechart span=1mon dc(UserID) as "Number of Users"

The current chart looks like this:
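One approach: turn the empty months' zero counts into nulls, then carry the last value forward with filldown. A sketch (assumes the empty months come back as 0 from dc):

```
<my search>
| timechart span=1mon dc(UserID) as count
| eval count = if(count = 0, null(), count)
| filldown count
| rename count as "Number of Users"
```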
I'm having trouble getting a new deployment client to connect to the DS. I can see connectivity is established, but the client keeps logging an error:

DC:DeploymentClient ... channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected

Looking at the splunkd_access log on the DS, I can see the handshake message being received with a 401 by the DS:

10.X.X.2 - - ... "POST /services/broker/connect/GUID/CLIENTNAME/guff/linux-x86_64/8089/9.0.2/GUID/universale_forwarder/CLIENTNAME HTTP/1.1" 401

I have plenty of Windows machines in the environment connecting successfully to this DS (which also runs on Windows), but this server and a few other Linux machines are not connecting. Any advice?
I have gone through a few YouTube videos and the documentation provided, but I am unable to find a source with the steps to integrate the data into Splunk Observability Cloud. Most of the available tutorials have the integration done before the recording, and the videos mainly explain the functionality of Splunk APM. May I have a reference on how to set up Splunk APM in Splunk Observability Cloud?
I have a field EXT-ID[48] of 18 bytes, where the first three bytes should contain the identifier OCT, positions 8-10 contain a value from 000 to 100, and position 11 contains a value 1-3. The Splunk log looks like the example below: here the identifier is OCT, positions 8-10 are blank, and the 11th position has a value.

EXT-ID[48] FLD[Additional Data, Priva..] FRMT[LVAR-Bin] LL[1] LEN[11] DATA[OCT 1]

I need a Splunk query that checks that positions 1-3 have the value OCT and positions 8-10 contain a value from 000 to 100 (basically, that positions 8-10 are non-blank) in EXT-ID[48]. I have tried this query, but it's not working:

index=au_axs_common_log source=*Visa* "EXT-ID[48] FLD[Additional Data, Priva..]" | rex field=_raw "(?s)(.*?FLD\[Additional Data, Priva.*?DATA\[(?<F48>[^\]]*).*)" | search F48="OCT%" @SPL
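Note that % is not a wildcard in SPL (that is SQL syntax; SPL uses *). A sketch using substr on the extracted value, assuming DATA[...] holds the raw field with its original byte positions and spacing intact:

```
index=au_axs_common_log source=*Visa* "EXT-ID[48] FLD[Additional Data, Priva..]"
| rex field=_raw "FLD\[Additional Data, Priva[^\]]*\].*?DATA\[(?<F48>[^\]]*)\]"
| eval id = substr(F48, 1, 3), pos8_10 = substr(F48, 8, 3)
| where id = "OCT" AND match(pos8_10, "^(0\d{2}|100)$")
```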
We have an issue where someone earlier created an input on a HF and did the data onboarding, but now the data has stopped coming into Splunk, and we are unable to find out which HF was used to create the input. Is there any way to find out which HF was being used to send the data to Splunk?
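While the data was still flowing, the forwarding host should be visible in the indexers' internal metrics. A sketch (run it over a time range when the data was last arriving):

```
index=_internal source=*metrics.log* group=tcpin_connections
| stats count, latest(_time) as last_seen by sourceHost, hostname
| convert ctime(last_seen)
```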
I have sample data in my Redis database as below. I have created an input there named abc_test, and the index is abc_test. I observed that no data is returned by the search query. May I get your assistance on how to test the Redis Enterprise Add-On for Splunk, please? Thank you.
Hello, regarding this post: https://community.splunk.com/t5/Building-for-the-Splunk-Platform/Impact-of-increasing-the-queue-size/m-p/630016#M10927 What are the best practices for managing queue sizes? Let's talk about servers running only splunkd (indexers and HFs) with 16GB of total physical memory. Thanks.
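For reference, queue sizes are set per queue in server.conf. The values below are purely illustrative, not a recommendation: the usual advice is to enlarge only the specific queues that metrics.log shows as blocked, since bigger queues buffer over a downstream bottleneck rather than fix it:

```
# server.conf (illustrative values only)
[queue=parsingQueue]
maxSize = 6MB

[queue=indexQueue]
maxSize = 32MB
```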
Hi all. Through my work I'm building a little distributed test environment. To make it extra hard on me, they have set up the search head, indexer, and forwarder on different vNets. Also, only the search head has a public IP. My question then is: how do I connect the indexer to the search head when the indexer does not have a public-facing IP? Hope the question makes sense. Jacob
Can Splunk Observability be used with a non-cloud (on-premises) application?