All Topics

I'm attempting to renew the certificate on a Splunk proxy server. According to our workflow, after the certificate has been renewed I need to restart the nginx service. My question is: will restarting the nginx service on the Splunk proxy server end any sessions between the user and the pod, or do they have some persistence through a restart? Even if there is only a brief outage, it needs to be noted in our documentation. Thanks!
I've been comparing two lookup files, but so far it's pure arithmetic. I'm trying to implement a true comparison that matches values and provides a percentage based on how many matches were found. So if I have three values in file1 and only two of those values match in file2, the percentage would equal 66.67%.

| inputlookup file1
| eventstats dc("Serial Number") as file1_Count
| dedup file1_Count
| inputlookup append=true file2
| eventstats dc("System Serial Number") as file2_Count
| dedup file2_Count
| fields file1_Count, file2_Count
| eval percentage=round('file2_Count'/'file1_Count'*100,2)."%"
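For reference, a minimal sketch of a value-by-value match, assuming the serial fields are named "Serial Number" in file1 and "System Serial Number" in file2 (adjust the rename targets to your actual headers):

| inputlookup file1
| rename "Serial Number" as serial
| eval in_file1=1
| append [| inputlookup file2 | rename "System Serial Number" as serial | eval in_file2=1]
| stats max(in_file1) as in_file1, max(in_file2) as in_file2 by serial
| stats sum(in_file1) as file1_Count, count(eval(in_file1=1 AND in_file2=1)) as matched_Count
| eval percentage=round(matched_Count/file1_Count*100,2)."%"

The first stats collapses each serial to one row flagged by which file(s) it appeared in; the second computes the true match ratio, so 2 matches out of 3 yields 66.67%.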
Hello everyone, I'm trying to schedule an alert looking like this: index=network host=device1 | stats count by sourceip | where count > 2 (over the last 7 days). I will schedule it daily, and I want it to search the last 7 days to see if an IP is found more than 2 times and return events like the below:

sourceip           count
162.14.xxx.xxx     5
185.225.xxx.xxx    7
203.122.xxx.xxx    3
61.246.xxx.xxx     6

The problem is that the next day I don't want to see the same results if there is no new data from a new IP in the last 24 hours. So I need to add a condition that will only allow the search to return results if a new result (where count > 2) was added in the last 24 hours. Do you have any suggestions? Thank you in advance.
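A hedged starting point, assuming the alert runs once a day: compute the 7-day counts as before, but also track each IP's most recent event and keep only IPs that were active in the last 24 hours:

index=network host=device1 earliest=-7d@d
| stats count, latest(_time) as last_seen by sourceip
| where count > 2 AND last_seen >= relative_time(now(), "-24h")

If "new" should instead mean an IP that has never alerted before, the alert would need to maintain a lookup of previously seen IPs (written back with outputlookup append=true) and filter against it.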
Hi, a high-level question about what goes on behind the scenes. I have an internal user who has written lots of handy macros that get chained together. The dashboards leveraging the macros use a base query with panels that continue processing the base query's result set. This user is hitting disk quota limits that other internal users do not hit.

The macros perform a series of joins and appends along the way, with 4 joins not being unusual. I'm wondering if the joins perhaps create multiple copies of the left side for each join along the way, requiring more disk space during processing stages even if the end result is "small". The usage reported in the search does not match the sum total of the usage on the job inspection page, so we are not sure what is consuming the space. I just ran one example of the chained macros, broken out to its query form in an ad hoc search, and the end result was only 64k events that are small in size (less than 50 characters each).

So I guess my question(s) is:
1. Do joins require a lot of disk space from the user's quota?
2. Any tips on how to debug end-user issues with disk quota usage?
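For question 2, a possible sketch using the search jobs REST endpoint to see which of the user's jobs are holding dispatch disk space (diskUsage is in bytes; the username here is a placeholder):

| rest /services/search/jobs count=0
| search eai:acl.owner="that_user"
| table title, eai:acl.owner, diskUsage, runDuration, ttl
| sort - diskUsage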
TL;DR: How would you approach adding multi-tenancy to SSE?

Hi there, I am looking to use the Splunk Security Essentials (SSE) app on a search head (SH) that is peered with a bunch of other SHs that have their own data. The app works fine, but it throws all the data it can find onto one pile and does its thing. What I'd like is to be able to set an SSE-wide extra query constraint (splunk_server=whatever) so that it would only look at data from one peered SH. This applies both to the original introspection and to the subsequent reports and the mapping to the MITRE framework.

Best case scenario, I can add a drop-down to select the peer, and the app would then work with data from that peer only. Alternatively, I guess I could deploy a modified app for each peer that is configured to look at that peer's data only. I'm relatively new to Splunk (hi :wave:) but not so new to development, so I'm happy to roll up my sleeves. I was hoping that somebody with a good understanding of the app (there's a lot going on) could give me some pointers on the best way to tackle this.

Thanks in advance for your input, much appreciated :)
joost
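As a hint of the shape of the constraint: much of this kind of introspection boils down to tstats over index metadata, and tstats accepts a splunk_server filter in its where clause, e.g. (peer name hypothetical):

| tstats count where index=* splunk_server="sh-peer-01" by index, sourcetype

So a dropdown token feeding that filter into the app's base searches seems plausible, though it would mean touching each search the app generates.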
Let's say we have 3 different events (2 with failure messages and 1 with a reconfigured message), distinguished by service name and timestamp.

Event 1: 2022-07-25 08:29:38.516 service_name=addtocart message=failure
Event 2: 2022-07-25 08:29:35.516 service_name=addtocart message=reconfigured
Event 3: 2022-07-25 08:29:30.516 service_name=addtocart message=failure

The output should show which service failed and was not reconfigured again, based on the latest timestamp:

_time                     service_name   message
2022-07-25 08:29:38.516   addtocart      failure
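A minimal sketch (the index name is a placeholder): take the latest message per service and keep only services whose latest state is a failure:

index=main service_name=* message=*
| stats latest(message) as message, latest(_time) as _time by service_name
| where message="failure"
| eval _time=strftime(_time, "%Y-%m-%d %H:%M:%S.%3N")
| table _time, service_name, message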
Experts, I have the below XML dashboard code. The first panel displays a calendar heat map using the d3.js library. My requirement is that when a day on the calendar is clicked, that date value needs to be passed as a token to the second panel. I could not get this working. Could you please help me?

<form script="autodiscover.js" hideChrome="true">
  <label>Monthly Utilization</label>
  <row>
    <panel>
      <title>Rolling Average</title>
      <html>
        <div id="search1" class="splunk-manager" align="center"
             data-require="splunkjs/mvc/searchmanager"
             data-options='{
               "search": { "type": "token_safe", "value": "source=...........| timechart span=1d values(AverageVAL) as \"Average VAL\"" },
               "cancelOnUnload": true,
               "preview": true
             }'>
        </div>
        <div id="heat_map" class="splunk-view" align="center"
             data-require="app/MFDashboard/calendarheatmap/calendarheatmap2"
             data-options='{
               "id": "fcal",
               "managerid": "search1",
               "domain": "month",
               "subDomain": "day"
             }'>
        </div>
      </html>
      <drilldown>
        <set token="Selectedday">$click.value$</set>
      </drilldown>
    </panel>
  </row>
  <row>
    <panel depends="$Selectedday$">
      <html>
        <p> Testing... </p>
      </html>
    </panel>
  </row>
</form>
I have a field called RenderedMessage in an event log which has the following text:

Task finished: TaskID 1 for branch 6000

I have been given the task of alerting in an email all the branches that have the task finished. In my search, I am able to get the events for this task as:

index=prod | spath RenderedMessage | search RenderedMessage="*Task finished: ColleagueNextWeekTask*"

How shall I extract only the branch values from these events/messages? I need only the 6000 from this. Thank you.
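A hedged sketch, assuming the message always ends with "for branch <number>": pull the trailing number with rex and collect the distinct branches:

index=prod
| spath RenderedMessage
| search RenderedMessage="*Task finished:*"
| rex field=RenderedMessage "for branch (?<branch>\d+)"
| stats values(branch) as branches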
Hello, I'm troubleshooting a possible problem with the DB Connect app. We set up a DB input that indexes at a 90-second frequency with a rising column by time. We have to index these data fairly frequently for monitoring. At around 4 PM, the monitoring team told me that data had stopped indexing. I checked the indexing log and found no errors; the log said input_mode=tail events=0, repeating for 30 minutes until we got normal indexing logs with rising column checkpoints again. I checked with SQL, and we do have data in between those times. So I want to pinpoint the root cause so we don't encounter this again. Is this a problem with networking, the Oracle DB, or Splunk itself? (I highly doubt the last, because I didn't change anything and it continued indexing 30 minutes later.)
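To narrow it down, the app's own logs may help. Assuming DB Connect 3.x, its server and input logs land in _internal with source names containing splunk_app_db_connect, so something like this around the 4 PM window could show connection or checkpoint errors:

index=_internal source=*splunk_app_db_connect* ("ERROR" OR "WARN")
| sort - _time
| table _time, source, _raw

If the task server logged nothing unusual for that window, a connectivity or Oracle-side gap around the checkpoint value becomes the more likely explanation.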
| rex "^(?<timestamp>\d+-\d+-\d+\s+\d+:\d+:\d+)\s+\[[^\]]*\]\s*\[(?<Log_level>[^\]]*)\]\s*\[(?<thread>[^\]]*)\]\s*\[(?<class>[^\]]*)\]\s*[^\[]+\s\[(?<Process>[^\]]+)"
| search Log_level="ERROR"
| where Process != ""
| stats count as ERRORS by Process
| sort - ERRORS

I have the above query to get the ERROR count of our processes, but I want to get the daily average of the number of errors generated by each process within a certain time interval, let's say from 6 AM to 6 PM, Monday to Friday. How can I achieve this?
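A sketch of the daily average, replacing the final stats/sort above and restricting to 6 AM-6 PM on weekdays:

| eval hour=tonumber(strftime(_time, "%H")), dow=strftime(_time, "%a")
| where hour >= 6 AND hour < 18 AND dow != "Sat" AND dow != "Sun"
| bin _time span=1d
| stats count as daily_errors by Process, _time
| stats avg(daily_errors) as avg_daily_errors by Process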
I have a dashboard that, only for some users (it seems to be some new ones or long-absent returning ones), is returning an "Action Forbidden." error message on panels. I have checked access permissions, but there are no differences from other users who are not receiving this error. I have also checked the Enterprise docs, but can't find a reference to this error message. The dashboard panel error message is shown below. Any help would be appreciated.
Data model (simplified):
- numeric value "Hours"
- numeric value "StartTime" (the time of day is assumed to always be 00:00:00), in Unix time
- numeric value "EndTime" (same assumption as above), in Unix time
- calculated from the above two: the time period, as a Unix time value
- calculated: "Hours" per day
- string value (categorical) "Group"

Goal: get a list of days where each day contains:
- the respective date
- the "Hours per day" value, assigned to a field named after the Group

Intention: create a visualization showing which group is needed how much at what time.
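A sketch of the day expansion under the assumptions above (midnight-aligned Unix times, field names as listed): mvrange generates one timestamp per day in the period, mvexpand fans each record out to one row per day, and chart pivots Group into columns:

| eval days=(EndTime - StartTime) / 86400
| eval hours_per_day=Hours / days
| eval day=mvrange(StartTime, EndTime, 86400)
| mvexpand day
| eval date=strftime(day, "%Y-%m-%d")
| chart sum(hours_per_day) over date by Group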
I want to make a report about how many alerts fired in a day. From the job inspection page, I want all of this info: owner, app, events, size, and runtime. It's to determine how many alerts overlap each other and how many times each alert triggered. I'd prefer it in SPL. Basically, I want this information to help me make a detailed report about the alerts in our system.
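A sketch against the scheduler logs in _internal, which carry the owner, app, runtime, and result count for each fired alert (alert_actions=* keeps only runs that actually triggered an action):

index=_internal sourcetype=scheduler alert_actions=* earliest=-1d@d latest=@d
| stats count as times_fired, values(app) as app, values(user) as owner, avg(run_time) as avg_run_time, sum(result_count) as events by savedsearch_name
| sort - times_fired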
Got this error on the search head; please help us resolve it:

Search peer xxxxxx has the following message: The metric value=0.00003393234971117585 provided for source=/opt/splunkforwarder/var/log/splunk/metrics.log, sourcetype=splunk_metrics_log, host=xxxxx, index=_metrics is not a floating point value. Using a "numeric" type rather than a "string" type is recommended to avoid indexing inefficiencies. Ensure the metric value is provided as a floating point number and not as a string. For instance, provide 123.001 rather than "123.001".
Hi everyone, after upgrading a heavy forwarder to version 9, we've encountered the following error: "Indicator 'ingestion_latency_gap_multiplier' exceeded configured value. The observed value is 1219. Message from 60F7CA48-C86F-47AD-B6EF-0B79273913A8:172.20.161.1:55892". Could you please assist in resolving the issue?
Hope you are doing great. I'm again facing a challenge and seeking some help.

Problem statement: We have 200 Windows servers, out of which 3 devices suddenly stopped reporting. I checked outputs.conf and server.conf and they look fine; I also compared those files with the ones on a working server, and everything matches. And yes, I checked the status of the non-reporting servers: the service is up and running, and when I test connectivity (TTL) the servers respond, but I'm unable to get the data into Splunk. I don't have much idea what the root cause could be; it would be great if you could suggest something.

Note: Splunk is installed on-prem.

Thanks,
Debjit
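For triage, a sketch that lists hosts by how long ago they last sent any data (the 60-minute threshold is arbitrary):

| tstats latest(_time) as last_seen where index=* by host
| eval minutes_since=round((now() - last_seen) / 60, 0)
| where minutes_since > 60
| sort - minutes_since

It is also worth checking whether the three hosts still appear in index=_internal (forwarder phone-home), which separates "forwarder down" from "data input broken".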
Hi Team, we are unable to get the alert emails even when events matching the alert condition are present in Splunk Cloud. Please help with how we can resolve this.
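As a first check (the alert name is a placeholder), the scheduler logs show whether the alert actually fired and triggered its email action:

index=_internal sourcetype=scheduler savedsearch_name="my_alert"
| table _time, status, result_count, alert_actions

If it fired but no mail arrived, errors from the email action usually surface in index=_internal source=*python.log* sendemail.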
My add-on's tar.gz is created with a local folder when I export it. When I extract it, it contains the local folder. When I try to upload the add-on tar.gz, I get this message. What is the problem? Thanks in advance, Amir
Hi All, How can I build a use case and get notified in Splunk when a user does not swipe his/her access card at the door but is logged into the domain? Please help.
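A hedged correlation sketch, assuming a badge index and Windows security logs that are both normalized to a common user field (all index and field names here are placeholders):

(index=badge action=swipe) OR (index=wineventlog EventCode=4624)
| eval activity=if(index="badge", "swipe", "logon")
| bin _time span=1d
| stats count(eval(activity="swipe")) as swipes, count(eval(activity="logon")) as logons by user, _time
| where logons > 0 AND swipes = 0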
Hi, I have a task to display the status of two URLs in the following table format:

URL Name                             In Usage   Status
http://lonmd1273241:4001/gmsg-mds/   Yes        Up
http://sfomd1273241:4001/gmsg-mds/   No         Up

The http://lonmd1273241:4001/gmsg-mds/ URL is printed in the live logs for the application, and http://sfomd1273241:4001/gmsg-mds/ is not printed in the logs. The status code is also printed in the logs for http://lonmd1273241:4001/gmsg-mds/, which is used to populate the Status column. Can someone please help with a query to create such a table in a dashboard? Any help would be appreciated.
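A sketch, assuming a lookup gmsg_urls.csv listing both URLs in a url column, and that the live logs contain the URL followed by a numeric status code that rex can pull out (the index, regex, and lookup name are placeholders to adapt):

| inputlookup gmsg_urls.csv
| join type=left url
    [ search index=app_logs "gmsg-mds"
      | rex "(?<url>http://\S+/gmsg-mds/)\s+(?<status_code>\d+)"
      | stats count as hits, latest(status_code) as code by url ]
| eval "In Usage"=if(isnotnull(hits) AND hits > 0, "Yes", "No")
| eval Status=if(isnotnull(code) AND code < 400, "Up", "Down")
| rename url as "URL Name"
| table "URL Name", "In Usage", Status

Note this marks an absent URL as Down; if a URL that never appears in the logs should still read Up, its status would have to come from somewhere else (e.g. a separate health check).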