All Topics


I want to receive Keycloak logs in the Splunk Cloud platform. I found Keycloak apps in Splunkbase, but they seem to be unavailable in Splunk Cloud. Are there any methods to receive Keycloak logs in Splunk Cloud?
Hello, is it possible to create an HEC token from the CLI of a Linux host? Any recommendations on how to create an HEC token from the CLI would be greatly appreciated. Thank you!
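A sketch of one common approach, assuming a Splunk Enterprise instance whose management port (8089) is reachable and admin credentials are available; the token name "my_hec_token" and the index are placeholders, and note that Splunk Cloud does not expose this management endpoint directly:

```shell
# Create an HEC token via the REST API; the XML response includes the generated token value.
curl -k -u admin:changeme \
    https://localhost:8089/services/data/inputs/http \
    -d name=my_hec_token \
    -d index=main

# List existing HEC tokens (JSON output):
curl -k -u admin:changeme \
    "https://localhost:8089/services/data/inputs/http?output_mode=json"
```

The same endpoint supports DELETE to remove a token, and the created token can then be used with the usual `curl ... https://host:8088/services/collector` ingestion calls.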
I've been asked to generate an uptime report for Splunk. I don't see anything obvious in the Monitoring Console, so I thought I'd try to build a simple dashboard. Does the Monitoring Console log things like
I am trying to ingest Proofpoint TAP logs into our Splunk environment and noticed that our Proofpoint TAP app is showing the dashboards for the Cisco FMC app for some reason. I thought I could resolve it by deleting the app and reinstalling it, but even after doing that it still shows the FMC dashboards. Has anyone seen this before? I tried looking for other posts with this issue, but my search is coming up short.
Hello, I'm attempting to display a group of logs by tranId. We log multiple user actions under a single tranId, and I want to group all of the logs for a single tranId in my dashboard. I think I have figured out how I want to display the logs, but I can't get the datetime format to display correctly.

index blah blah
| eval msgTxt=substr(msgTxt, 1, 141)
| stats list(_time) as DateTime list(msgTxt) as Message list(polNbr) as QuoteId by tranId
| eval time=strftime(_time," %m-%d-%Y %I:%M:%S %p")
| streamstats count as log by tranId
| eval tranId=if(log=1,tranId,"")
| fields - log

Please help with displaying the date and time format. Thanks!
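A hedged sketch of one fix, using the field names and placeholder index from the question: after `stats`, `_time` no longer exists as a field, so the later `strftime(_time, ...)` has nothing to format. Formatting the timestamp before the `stats` and listing the formatted field avoids the problem:

```spl
index blah blah
| eval msgTxt=substr(msgTxt, 1, 141)
| eval DateTime=strftime(_time, "%m-%d-%Y %I:%M:%S %p")
| stats list(DateTime) as DateTime list(msgTxt) as Message list(polNbr) as QuoteId by tranId
| streamstats count as log by tranId
| eval tranId=if(log=1, tranId, "")
| fields - log
```

An equivalent alternative is `stats list(eval(strftime(_time, "%m-%d-%Y %I:%M:%S %p"))) as DateTime ...`, which formats inside the stats call itself.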
My Splunk search is as follows:

index="someindex" cf_space_name="somespace" msg.severity="*"
| rex field=msg.message ".*METHOD:(?<method>.*),\sREQUEST_URI:(?<requestURI>.*),\sRESPONSE_CODE:(?<responseCode>.*),\sRESPONSE_TIME:(?<responseTime>.*)\sms"
| stats count by msg.service, method, requestURI, responseCode
| sort -count

Result table:

msg.service | method | requestURI | responseCode | count
serviceA | GET | /v1/service/a | 200 | 327
serviceB | POST | /v1/service/b | 200 | 164
serviceA | POST | /v1/service/a | 200 | 91

Under Visualization, I am trying to render this as a bar chart, but I am getting all four fields on the x-axis: msg.service is mapped with count, and responseCode is mapped with responseCode. The other two fields are not visible since they are non-numeric. If I remove fields as follows, I get a proper chart (just msg.service mapped with count):

my query | fields - responseCode, method, requestURI

But what I need on the x- and y-axes is:

x-axis | y-axis
serviceA GET /v1/service/a 200 | 327
serviceB POST /v1/service/b 200 | 164
serviceA POST /v1/service/a 200 | 91

How can I achieve this?
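A hedged sketch of one way to get that shape, assuming the goal is a single categorical x-axis: concatenate the four fields into one label field after the stats (note that msg.service contains a dot, so it needs single quotes inside eval):

```spl
index="someindex" cf_space_name="somespace" msg.severity="*"
| rex field=msg.message ".*METHOD:(?<method>.*),\sREQUEST_URI:(?<requestURI>.*),\sRESPONSE_CODE:(?<responseCode>.*),\sRESPONSE_TIME:(?<responseTime>.*)\sms"
| stats count by msg.service, method, requestURI, responseCode
| eval label='msg.service'." ".method." ".requestURI." ".responseCode
| fields label count
| sort - count
```

With only `label` and `count` remaining, the bar chart maps label to the x-axis and count to the y-axis, which matches the desired table.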
Hello, I want to initialize a token with the week number of today's date. According to the documentation (https://docs.splunk.com/Documentation/SCS/current/Search/Timevariables), the variable to use to get the week of the year (1 to 52) is %V. This works in any search query, but it does not work when used in the <init> tag of a dashboard. This is my <init>:

<form version="1.1" theme="dark">
  <init>
    <eval token="todayYear">strftime(now(), "%Y")</eval>
    <eval token="todayMonth">strftime(now(), "%m")</eval>
    <eval token="todayWeek">strftime(now(), "%V")</eval>
    <eval token="yearToken">strftime(now(), "%Y")</eval>
    <eval token="monthToken">strftime(now(), "%m")</eval>
  </init>
...

All these tokens are initialized correctly except todayWeek, which uses the %V variable and gets no value. What am I doing wrong?
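If the <eval> token expressions in Simple XML don't honor %V (they support only a subset of what search-time strftime does), a common workaround is to compute the value in a small base search and set the token in its <done> handler. A sketch, not tested against this exact dashboard:

```xml
<search>
  <query>| makeresults | eval week=strftime(now(), "%V")</query>
  <done>
    <set token="todayWeek">$result.week$</set>
  </done>
</search>
```

Because the search runs through the normal search pipeline, any strftime format specifier that works in SPL should work here.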
Hi, I'm trying to learn how appendpipe works. To do that, I tried a dummy search, and I don't understand why appendpipe returns the highlighted row.
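For anyone puzzling over the same behavior, a minimal illustration (hypothetical data, not the poster's search): appendpipe runs its subpipeline over the current result set and appends the subpipeline's output as extra rows, which is typically where the unexpected "extra" row comes from:

```spl
| makeresults format=csv data="item,qty
a,2
b,3"
| stats sum(qty) as qty by item
| appendpipe [ stats sum(qty) as qty | eval item="TOTAL" ]
```

This returns the rows for a and b plus one appended row (item=TOTAL) produced by the subpipeline; the original rows are untouched.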
Greetings, I found some useful savedsearches under SA-AccessProtection / DA-ESS-AccessProtection, which I am interested in using. However, I'd like to understand these use-cases before making them live.   Are these apps and their content documented somewhere? So far, I have not had any luck.   Thanks!
I have a dashboard that a specific team uses. Today they asked why one of the panels was broken. Looking into it, we were receiving this error from the search:

Error in 'fit' command: Error while fitting "StateSpaceForecast" model: timestamps not continuous: at least 33 missing rows, the earliest between "2024-01-20 07:00:00" and "2024-01-20 09:00:00", the latest between "2024-10-02 06:00:00" and "2024-10-02 06:00:01"

That seemed pretty straightforward; I thought we might be missing some timestamp values. This is the query we are running:

| inputlookup gslb_query_last505h.csv
| fit StateSpaceForecast "numRequests" holdback=24 forecast_k=48 conf_interval=90 output_metadata=true period=120

Looking into the CSV file itself, I went to look for missing values under the numRequests column. We have values for each hour going back almost a year. Looking at the screenshot of the timestamps mentioned in the error, there is an hour missing: the 08:00 timestamp. That may be the cause. How would I go about efficiently finding the 33 missing values? Each missing value would be somewhere between two consecutive hours. Will I have to go through and find skipped hours among 8k results in the CSV file? Thanks for any help.
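A hedged sketch for locating the gaps without eyeballing 8k rows, assuming the lookup has an epoch _time column (if the timestamp is a formatted string, convert it with strptime first): sort ascending, compute the difference between consecutive timestamps, and keep only the rows where the gap exceeds one hour.

```spl
| inputlookup gslb_query_last505h.csv
| sort 0 _time
| delta _time as gap
| where gap > 3600
| eval missing_rows = round(gap / 3600) - 1
| table _time gap missing_rows
```

Each surviving row marks the hour that follows a gap, and missing_rows estimates how many hourly rows are absent there; summing missing_rows should account for the 33 the fit command reported.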
Is there any guide on how to configure security products to send their logs to Splunk or what are the recommended logs that should be sent, like the DSM guide in QRadar?
I have confirmed that SmartStore is working, and I have a question regarding the 100 GB EBS volume attached to EC2. If I do not set max_cache_size, will it freeze when the cache fills the 100 GB volume? In another test, an EBS volume created with 10 GB froze with a capacity-full error when max_cache_size was not set. What I would like to confirm is whether, without max_cache_size set, Splunk stops when the volume becomes full.
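For what it's worth, the SmartStore cache ceiling is normally configured in server.conf (cache manager) rather than indexes.conf; a sketch with illustrative values, not recommendations — verify each setting against the server.conf spec for your version:

```ini
# server.conf on the indexer (sizes in MB)
[cachemanager]
max_cache_size = 80000      # cap the local cache below the 100 GB volume size
eviction_policy = lru       # evict least-recently-used buckets first

[diskUsage]
minFreeSpace = 5000         # Splunk also tries to keep this much disk free
```

When no explicit cap is set, eviction is driven by free-space thresholds instead, so whether an undersized volume "freezes" can depend on whether eviction keeps up with downloads; capping the cache well below the volume size avoids relying on that.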
I have a Splunk query which generates output in CSV/table format. I want to convert this to JSON before writing it to a file. tojson does the job of converting, but the fields are not in the order I expect. The table output order is timestamp, Subject, emailBody, operation, yet the resulting JSON output is in the order subject, emailbody, operation, timestamp. How do I make tojson write fields in the table order, or is there an alternate way of getting the JSON output I expect?
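A hedged alternative, using the four field names from the question: build the JSON explicitly with json_object, which in my experience emits keys in the order the arguments are given — worth verifying on your Splunk version before relying on it:

```spl
... your search ...
| table timestamp Subject emailBody operation
| eval json=json_object("timestamp", timestamp, "Subject", Subject, "emailBody", emailBody, "operation", operation)
| fields json
```

This trades tojson's automatic field discovery for explicit control of both the key names and their order.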
Hi, I'm trying to enhance the functionality of the "Acknowledge" button in a Splunk IT Service Intelligence episode. When I click it, I want it to not only change the status to "In Progress" and assign the episode to me, but also trigger an action such as sending an email or creating a ticket in a ticketing system. I'm aware that automatic action rules can be set in aggregation policies, but I want these actions to occur specifically when I manually click the "Acknowledge" button. Is there a way to achieve this? Thanks!
Probably a basic question. I have the following data:

600 reason

and this rex:

(?<MetricValue>([^\s))]+))(?<Reason>([^:|^R]+))

What I am getting is 60 in MetricValue and 0 in Reason. I presume that is because the match runs up to the next non-space character, so MetricValue ends up as 60 and the 0 is left over for Reason. What is the right way to do this so that I get MetricValue = 600 and Reason = reason?
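A sketch of a simpler pattern for data shaped like "600 reason": match the number and the word with explicit character classes instead of negated ones, so backtracking can't split the digits between the two groups. The makeresults lines just fake the event for testing:

```spl
| makeresults
| eval _raw="600 reason"
| rex "^(?<MetricValue>\d+)\s+(?<Reason>\S+)"
| table MetricValue Reason
```

`\d+` consumes all consecutive digits (600), `\s+` requires the separating whitespace, and `\S+` takes the following word (reason).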
How do I dedup or filter out data with a condition? For example, below I want to filter out rows that contain name="name0". The condition should handle any IPs in the ip field, because the IPs can change (in the real data there are many more), and the name0 rows are not in any particular order. The filter should not be applied to IPs that don't contain "name0", and it should not be applied to a unique IP whose only name is "name0". Thank you for your help.

Data:

ip | name | location
1.1.1.1 | name0 | location-1
1.1.1.1 | name1 | location-1
1.1.1.2 | name2 | location-2
1.1.1.2 | name0 | location-20
1.1.1.3 | name0 | location-3
1.1.1.3 | name3 | location-3
1.1.1.4 | name4 | location-4
1.1.1.4 | name4b | location-4
1.1.1.5 | name0 | location-0
1.1.1.6 | name0 | location-0

Expected output:

ip | name | location
1.1.1.1 | name1 | location-1
1.1.1.2 | name2 | location-2
1.1.1.3 | name3 | location-3
1.1.1.4 | name4 | location-4
1.1.1.4 | name4b | location-4
1.1.1.5 | name0 | location-0
1.1.1.6 | name0 | location-0

| makeresults format=csv data="ip, name, location
1.1.1.1, name0, location-1
1.1.1.1, name1, location-1
1.1.1.2, name2, location-2
1.1.1.2, name0, location-20
1.1.1.3, name0, location-3
1.1.1.3, name3, location-3
1.1.1.4, name4, location-4
1.1.1.4, name4b, location-4
1.1.1.5, name0, location-0
1.1.1.6, name0, location-0"
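One hedged way to express the rule "drop a name0 row only when its ip also has other rows", using the sample data above (commas written without spaces so the values come through clean): count rows per ip with eventstats, then filter.

```spl
| makeresults format=csv data="ip,name,location
1.1.1.1,name0,location-1
1.1.1.1,name1,location-1
1.1.1.2,name2,location-2
1.1.1.2,name0,location-20
1.1.1.3,name0,location-3
1.1.1.3,name3,location-3
1.1.1.4,name4,location-4
1.1.1.4,name4b,location-4
1.1.1.5,name0,location-0
1.1.1.6,name0,location-0"
| eventstats count as ipCount by ip
| where NOT (name="name0" AND ipCount > 1)
| fields - ipCount
```

An ip that only appears once keeps its row even when the name is name0 (1.1.1.5 and 1.1.1.6), while ips with multiple rows lose only their name0 row, which matches the expected output.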
Hello, I recently upgraded to 9.3.0, and the file integrity check showed that /opt/splunk/bin/jp.py no longer needed to be installed, so we deleted it. However, the checker still complains about that file. Is there a way to clear/reset the checker?
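If it's only the cached result that is stale, re-running the integrity check on demand may clear it; a sketch assuming a default install path (the check also runs at startup, so a restart refreshes the result as well):

```shell
# Re-run the file integrity check against the install manifest
/opt/splunk/bin/splunk validate files
```

If the manifest itself still lists jp.py for your build, the warning will persist until the installed files and the manifest agree again.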
When I create a timechart using Dashboard Studio, the visualization only partially loads; it only renders fully after I click to open the visualization in a new window. We are on Splunk 9.0.5, but I don't see any known issues about this.
Hello everyone, I have a table (generated from stats) with several columns, and some values in those columns are "X". I would like to count those X's and total them in the last column of the table. How would I go about doing that? Here is an example table, and thank you!

Field1 | Field2 | Field3 | Field4 | Field5 | Total_Xs
X | X | Foo | Bar | X | 3
Foo2 | X | Foo | Bar | X | 2
X | X | X | Bar | X | 4
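A hedged sketch using foreach over the example columns (it assumes the columns to scan all match the wildcard Field*; adjust the wildcard to your real field names). The makeresults block just recreates the sample table:

```spl
| makeresults format=csv data="Field1,Field2,Field3,Field4,Field5
X,X,Foo,Bar,X
Foo2,X,Foo,Bar,X
X,X,X,Bar,X"
| eval Total_Xs=0
| foreach Field* [ eval Total_Xs=Total_Xs + if('<<FIELD>>'=="X", 1, 0) ]
```

foreach substitutes each matching field name for <<FIELD>> in the subsearch, so the eval is applied once per column and Total_Xs accumulates the per-row count of X values.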
Sometimes I set myself SPL conundrum challenges just to see how to solve them. I realised I couldn't do something I thought would be quite straightforward. For the dummy data below, I want a single-row result set that tells me how many events there are of each UpgradeStatus and how many events in total, i.e.

Total | Completed | Pending | Processing
11 | 6 | 3 | 2

I don't know in advance what the different values of UpgradeStatus might be, and I don't want to use addtotals (this is the challenge part). I came up with the solution below, which kinda "misuses" xyseries (which I'm strangely proud of). I feel like I'm missing a more straightforward solution, other than addtotals. Anyone up for the challenge? Dummy data and solution (misusing xyseries) follow:

| makeresults format=csv data="ServerName,UpgradeStatus
Server1,Completed
Server2,Completed
Server3,Completed
Server4,Completed
Server5,Completed
Server6,Completed
Server7,Pending
Server8,Pending
Server9,Pending
Server10,Processing
Server11,Processing"
| stats count by UpgradeStatus
| eventstats sum(count) as Total
| xyseries Total UpgradeStatus count
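One hedged alternative that also avoids addtotals and doesn't need to know the UpgradeStatus values in advance, at the cost of a transpose (the bookkeeping column that transpose emits, named "column", is dropped at the end):

```spl
| makeresults format=csv data="ServerName,UpgradeStatus
Server1,Completed
Server2,Completed
Server3,Completed
Server4,Completed
Server5,Completed
Server6,Completed
Server7,Pending
Server8,Pending
Server9,Pending
Server10,Processing
Server11,Processing"
| stats count by UpgradeStatus
| appendpipe [ stats sum(count) as count | eval UpgradeStatus="Total" ]
| transpose header_field=UpgradeStatus
| fields - column
```

appendpipe adds the Total row alongside the per-status counts, and transpose then pivots each UpgradeStatus value into its own column, giving the same single-row shape as the xyseries trick.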