All Topics

Hi Splunkers, I am a beginner at Splunk. I managed to get all the data from AIDA64 into Splunk, including temperatures, MHz clocks of all cores, TDP values, etc. Now I want to build a nice timechart of the system's average CPU power usage per minute. The problem is that the values arrive in Splunk every 2-4 seconds. The field CPU_power is the one I want to chart. A plain timechart didn't work for me, because I only managed to show every individual value as its own line in the timechart. I then tried to sum all the values within a minute and divide by the number of events in that minute, but that didn't give me a per-minute result either. I don't know how to solve this. Hopefully you can help me out here.
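A minimal sketch of the per-minute average, assuming the AIDA64 events live in an index called aida64 (a placeholder; adjust to your environment) and the field is named CPU_power as described above. timechart's span and avg() do the bucketing and the averaging in one step:

    index=aida64 CPU_power=*
    | timechart span=1m avg(CPU_power) AS avg_cpu_power

timechart groups events into one-minute buckets on _time, so the 2-4 second sample rate is averaged away automatically.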
As seen in Solved: How to establish secure connection between Univers... - Splunk Community, there are ways to secure the connection between the forwarder and indexer. This stops unauthorized users from forwarding to the Splunk indexer and from managing the other Splunk components. More detailed steps on SSL, and on tokens, can be seen there for stopping unauthorized components from connecting. However, this does not stop the forwarder from sending rubbish data to the indexer. Is there any way that the forwarder or some other component can inspect the data and stop rubbish or strange data from being sent to the indexer?
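One built-in option, sketched below under assumptions: Splunk can drop unwanted events at parse time (on the indexer or a heavy forwarder, not on a universal forwarder) by routing them to nullQueue. The sourcetype name and regex here are placeholders:

    # props.conf
    [my:sourcetype]
    TRANSFORMS-drop_rubbish = drop_rubbish

    # transforms.conf
    [drop_rubbish]
    REGEX = ^some_rubbish_pattern
    DEST_KEY = queue
    FORMAT = nullQueue

This filters by pattern rather than doing true packet inspection, but it does keep matching events out of the index.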
We are excited to announce the release of Splunk Enterprise 8.2. You can now seamlessly access all of your Splunk data with Federated Search for a unified search experience across all of your deployments, whether they are on-premises or in the cloud. Extracting and communicating your insights is now easier than ever before with Dashboard Studio, the new and intuitive dashboard builder for creating visually compelling dashboards with advanced visualization tools and fully customizable formats. We are also introducing numerous new data management capabilities for improved environment monitoring and enhanced self-service and auditing. Admins can leverage a variety of new apps, including the Python 3 Readiness App, Knowledge Object Overview App, and What's New in 8.2 App. Plus, you can easily bring Splunk to any part of your organization with the Splunk Operator for Kubernetes, which quickly deploys Splunk Enterprise on your choice of private or public cloud provider while automating workflows and implementing Kubernetes best practices. Read the blog to learn more! Update today to see them in action, or consider Splunk Cloud Platform to enjoy uninterrupted service delivery of the most innovative and up-to-date features.
Hello, this is my first question here, since I don't know how to search for the solution. I have tried to resolve this for the past three days, but was not able to. From NJMON (JSON format for NMON) I get some multilevel fields, and I'm trying to build a graph of the CPU utilization of sys, user, and wait versus the total (sys+wait+idle+user). The problem is not the math, but how to group this information by CPU number. This is the structured data I get from NJMON:

    cpu_physical: {
        cpu0:  { idle: 0, sys: 0, user: 0, wait: 0 }
        cpu1:  { idle: 0, sys: 0, user: 0, wait: 0 }
        cpu10: { idle: 0, sys: 0, user: 0, wait: 0 }
    }

So basically I have the fields cpu_physical.cpu0.idle, cpu_physical.cpu0.sys, cpu_physical.cpu0.wait, cpu_physical.cpu0.user, cpu_physical.cpu1.idle, cpu_physical.cpu1.wait, cpu_physical.cpu1.sys, cpu_physical.cpu1.user, and so on. The same system could have anywhere from 4 to 64 CPUs, so I cannot just write | eval CPU0.idle+CPU0.wait+... by hand. Basically, this is what I need as the final output, in table format:

    CPU#   Idle   Wait   Sys   User
    CPU0   10%    12%    13%   16%
    CPU1   15%    15%    44%   67%
    CPU2   XX%    16%    X%    X%
    CPU3   XY%    X%     ...   ...

With these results I would be able to calculate (Wait+User+Sys) / (Wait+User+Sys+Idle) per CPU. Any hints or ideas on how to do this? Sorry if the text is confusing; I don't know how to explain it more briefly, and maybe that is why I was not able to find the solution. Thank you very much!
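A sketch of one way to pivot the flattened field names into rows, assuming the field names are exactly as listed above and that the index/sourcetype names here are placeholders. transpose turns the columns into rows so the CPU number can be parsed out of the field name with rex, and xyseries rebuilds the per-CPU table:

    index=njmon sourcetype=njmon:json
    | head 1
    | fields cpu_physical.cpu*.*
    | fields - _time
    | transpose column_name=field
    | rex field=field "cpu_physical\.(?<cpu>cpu\d+)\.(?<metric>idle|sys|user|wait)"
    | where isnotnull(cpu)
    | xyseries cpu metric "row 1"
    | eval busy_pct=round(100*(sys+user+wait)/(sys+user+wait+idle),2)

The head 1 keeps the sketch to a single event; for a real dashboard you would aggregate over time first (for example with stats avg over each cpu_physical.cpu*.* field) before the transpose.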
Hi there, I am trying to install this app https://splunkbase.splunk.com/app/5278/ and configure it on a HF to make the CASB data CIM compliant. The installation was successful, but the app does not load on the HF so that I can update the configuration; I get a blank white screen only. - I have restarted the Splunk HFs and the SH - The Splunk version is compatible with the app version Can someone help me fix this issue? Thanks.
I have a single user affected by a strange issue: they are able to search, but the event table returns no content. If this user submits a search, the URL appears to be malformed: https://splunkinstance.site:8000/en-US/app/search/search?dispatch.sample_ratio=1&display.events.fields=%5B%22host%22%2C%22source%22%2C%22sourcetype%22%2C%22callerIpAddress%22%2C%22category%22%2C%22tag%22%2C%22tag%3A%3Aeventtype%22%2C%22ms_Mcs_AdmPwd%22%2C%22user_login%22%2C%22user_caps%22%2C%22object_type%22%2C%22object_name%22%2C%22hist_ip%22%2C%22signature%22%2C%22error%22%2C%22sender%22%2C%22mail%22%5D&display.events.list.wrap=1&display.events.maxLines=5&display.events.rowNumbers=0&display.events.table.wrap=1&display.events.type=list&display.general.type=events&display.page.search.mode=verbose&display.page.search.tab=events&display.prefs.events.count=10&workload_pool=&q=search%20index%3Dweb&earliest=-15m&latest=now&sid=1620911471.38537_9F0273CD-B076-4A00-B73C-8A9CFED6A82A Whereas if I issue the same search, my URL is: https://splunkinstance.site:8000/en-US/app/search/search?earliest=-15m&latest=now&q=search%20index%3Dweb&display.page.search.mode=verbose&dispatch.sample_ratio=1&workload_pool=&sid=1620911758.72975_80C75F5F-836C-4502-ADC6-6F26EF89DE77 The user's GET requests appear to be recalling an extensive field list even when a single index is searched: index=web | fields index I've confirmed the issue with the user in different browsers, and that the user has the correct permissions.
How do we best configure the Splunk Security Essentials app? How do we enable ALL use cases for SOC team use? It is already integrated with ES, but I don't see many use cases showing up. Please advise.
Sorry to ask this question if it has been discussed before. I have a Splunk ES installation where we use Incident Review to keep track of incidents and notable events. As part of this we have a requirement to report an SLA for all notable events. What we have found is that we can build a search that returns the incident information and links to the notable event with no problem, but when we use the search to look at status_label to measure the time from when the event was logged to when it was closed, there can be an issue if someone adds a note to a notable after the incident was closed. Can you help me make the SLA read only the first time we see the notable event's status change to Closed, and ignore all the following status reports of Closed?

    `notable`
    | search NOT `suppression` info_search_time=*
    | eval review_time=mvindex(review_time,0)
    | eval response_time=review_time-info_search_time
    | eval still_open=if(status_group!="Closed",now()-info_search_time,null())
    | eval closed=if(status_group="Closed",1,0)
    | eval in_sla=case((urgency=="critical" AND still_open>(3600*2)),1,(urgency=="high" AND still_open>(3600*8)),1,(urgency=="medium" AND still_open>(3600*72)),1,(urgency=="low" AND still_open>(3600*120)),1,(urgency=="informational" AND still_open>(3600*144)),1,1=1,0)
    | eval metric_count=case((urgency=="critical" AND (response_time>(3600*2) OR in_sla=1)),1,(urgency=="high" AND (response_time>(3600*8) OR in_sla=1)),1,(urgency=="medium" AND (response_time>(3600*72) OR in_sla=1)),1,(urgency=="low" AND (response_time>(3600*120) OR in_sla=1)),1,(urgency=="informational" AND (response_time>(3600*144) OR in_sla=1)),1,1=1,0)
    | stats count sum(metric_count) as metric_met, sum(closed) as closed sum(response_time) as response_sum, avg(response_time) as response_avg, max(response_time) as response_max count(still_open) as open, avg(still_open) as avg_open max(still_open) as max_open sum(in_sla) as sla_ok by urgency
    | appendpipe [inputlookup urgency_list.csv]
    | dedup urgency
    | eval SLA=case(urgency=="critical","4",urgency=="high","8",urgency=="medium","72",urgency=="low","120",urgency=="informational","144")
    | eval "SLA Compliance"=round((metric_met*100/count),2), response_avg=tostring(round((response_avg),0),"duration"), response_max=tostring(round((response_max),0),"duration"), avg_open=tostring(round((avg_open),0),"duration"), max_open=tostring(round((max_open),0),"duration"), overdue=open-sla_ok
    | eval count=tostring(count,"commas"), closed=tostring(closed,"commas"), open=tostring(open,"commas"), sla_ok=tostring(sla_ok,"commas"), overdue=tostring(overdue,"commas")
    | table urgency, SLA, count, "SLA Compliance", closed, response_avg, response_max, open, avg_open, max_open, sla_ok, overdue
    | sort SLA
    | eval urgency=upper(substr(urgency,1,1)).substr(urgency,2)
    | fillnull value="0" count closed open sla_ok overdue
    | fillnull value="100.0" "SLA Compliance" response_avg response_max avg_open max_open
    | rename urgency as Urgency, SLA as "SLA Target (Hours)", count as "Total Notables", closed as "Closed Notables", response_avg as "Avg. Time to Close (HH:MM:SS)", response_max as "Max. Time to Close (HH:MM:SS)", open as "Open Notables", avg_open as "Avg. Time Open (HH:MM:SS)", max_open as "Max. Time Open (HH:MM:SS)", sla_ok as "Within SLA", overdue as "Overdue"
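A sketch of one way to pin response_time to the first close, under the assumption that the ES incident review KV store lookup (incident_review_lookup, with fields rule_id, time, and status) records every status change, and that status 5 maps to Closed in your reviewstatuses (the ES default): pull the earliest Closed timestamp per rule_id from there, instead of relying on review_time on the notable, which later comments can disturb:

    `notable`
    | search NOT `suppression` info_search_time=*
    | join type=left rule_id
        [| inputlookup incident_review_lookup
         | search status=5
         | stats min(time) AS first_closed_time by rule_id ]
    | eval response_time=first_closed_time-info_search_time

Later comments or status edits cannot move first_closed_time, because min(time) always returns the earliest Closed entry for that rule_id.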
Hello my friends, I'm new to Splunk and still trying to figure out the ins and outs of everything. I have a report that's been handed down to me, and I'm trying to improve its run time. The report runs at 3 AM EST every day, but doesn't send out until 8 PM that night. I suspect this has to do with the amount of data the report collects and displays. To fix that, I want to trim down how much data we are building, but I don't know if it will actually help. What we currently show is how many times each of our 320 dispensers is dispensing, how many of each error they are getting, the error rate, and their total number of errors. What I am thinking of doing is focusing on the total number of errors: specifically, if a dispenser returns 0 errors, it's not in the report. Currently, the TotalErrors column is evaluated like this: | eval TotalError=(Error1+Error2+Error3) My thought is to evaluate against TotalError and, if the number of errors is 0, not display that dispenser.
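A minimal sketch of that filter, reusing the eval from the post; where drops any dispenser row whose total is zero before it reaches the output:

    ... | eval TotalError=Error1+Error2+Error3
    | where TotalError > 0

Note that filtering at display time mostly shrinks the output; if the run time is dominated by the initial data collection, moving filters as early as possible into the base search will matter more.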
I created a first Java program with the Splunk SDK and set the scheme to "http", but I get this: "Exception in thread "main" java.lang.RuntimeException: Connection reset".

    import java.io.InputStream;
    import com.splunk.Args;
    import com.splunk.Service;
    import com.splunk.ServiceArgs;

    // Connect to splunkd on the management port
    ServiceArgs serviceArgs = new ServiceArgs();
    serviceArgs.setUsername("admin");
    serviceArgs.setPassword("password");
    serviceArgs.setHost("192.168.1.88");
    serviceArgs.setPort(8089);
    serviceArgs.setScheme("http");
    Service service = Service.connect(serviceArgs);

    // Run a one-shot search and read the raw results stream
    Args oneshotSearchArgs = new Args();
    InputStream resultsStream = service.oneshotSearch("search index=_internal | head 5", oneshotSearchArgs);
Hi, we have installed and configured Splunk on a Linux machine with the objective of receiving data from an AD on a Windows Server 2019 machine. After installing the Splunk Universal Forwarder and following the steps in the documentation, we see the following output from the netstat command: "splunk:8089 SYN_SENT". The Splunk instance on the Linux machine has the "Splunk Add-on for Microsoft Windows", and both services (including the UF on the Windows machine) were restarted after adding it. Then, when the "Data Inputs - Windows Event Logs" option is selected, we see the following error: "Select Forwarders This feature is not available with your installed set of licenses" Therefore, we can't receive any logs. Are we missing something here?
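For reference, a sketch of the usual plumbing, with the host name and port as assumptions: the UF ships data to the indexer over a receiving port (conventionally 9997), not the management port 8089 that appears in the SYN_SENT line, and the indexer must have receiving enabled:

    # outputs.conf on the Windows UF
    [tcpout]
    defaultGroup = linux_indexer

    [tcpout:linux_indexer]
    server = <linux-splunk-host>:9997

    # inputs.conf on the Linux indexer
    # (or Settings > Forwarding and receiving > Receive data)
    [splunktcp://9997]
    disabled = 0

The Windows event log inputs themselves are then defined in inputs.conf on the UF (e.g. [WinEventLog://Security]), rather than through the "Select Forwarders" page.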
Good day, can anyone confirm whether the Jenkins trigger app is going to be updated? I tried using the webhooks plugin, but it seems to use the outdated Python urllib library rather than the Python requests library. At this point I can't seem to trigger an action webhook to Jenkins from Splunk. Thanking you kindly.
Hello, we have a PostgreSQL cluster in our own local environment, and our monitoring tool in this environment is Splunk. Currently we have built our own monitoring using Collectd and the Splunk UF, written custom scripts for data fetching, and created our own dashboards and panels. We are looking for something like Elasticsearch's metricbeat and filebeat, which provide built-in metric collection and dashboards. We are using the Collectd PostgreSQL plugin (https://docs.signalfx.com/en/latest/integrations/agent/monitors/collectd-postgresql.html), but it lacks many PostgreSQL metrics. We have searched for Splunk applications for better PostgreSQL monitoring, but all the add-ons and apps are archived. Do you know if there is any built-in monitoring for PostgreSQL in Splunk, or has anyone written searches and dashboards for PostgreSQL? Thanks!
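For homegrown dashboards in the meantime, a sketch of charting the collectd measurements with mstats, assuming collectd writes into a metrics index named collectd_metrics and emits metric names prefixed postgresql. (both placeholders for your setup):

    | mstats avg(_value) AS value WHERE index=collectd_metrics AND metric_name="postgresql.*" span=5m BY metric_name

Each panel of a custom PostgreSQL dashboard can be one such query narrowed to a specific metric_name.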
Hello, I am new to Splunk. Currently I am trying to send an alert to a website (located at localhost). Is there any way I can do that with a workflow? Can someone show me the steps? Thanks a lot.
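A sketch using Splunk's built-in webhook alert action rather than a workflow action, with the URL and stanza name as placeholders for the local site: in the alert's Trigger Actions, add "Webhook" and point it at the endpoint, or equivalently in savedsearches.conf:

    # savedsearches.conf (sketch; stanza name and URL are hypothetical)
    [my_alert]
    action.webhook = 1
    action.webhook.param.url = http://localhost:8080/alert

Splunk then POSTs a JSON payload describing the triggered alert to that URL each time the alert fires.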
Hi, can someone help me with the query for the below requirement? I have User A, User B, User C, and so on, each with jobs whose status is In Progress, To Do, or Done. I need to list the jobs assigned to all the users in the form of a bar chart, e.g. User A may have jobs with status In Progress and To Do:

    User A -- In Progress
              To Do
    User B -- To Do
              Done

Thanks
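A minimal sketch, assuming the events carry fields named user and status and live in an index called jobs (all placeholders); chart count over/by produces exactly the per-user, per-status series a bar chart needs:

    index=jobs
    | chart count OVER user BY status

Rendered as a bar chart, each user becomes one group of bars with one bar per status.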
Hello my gorgeous people from Splunk, I hope everyone is keeping their sanity during these hard times! I was wondering how to properly obtain, in Splunk, the distinct count of events per interval of x units of time, and also the average across those intervals during the working week. Let's start with the first scenario. I work for a hotel chain corp.; due to "rona" our room prices have dropped, we are receiving a lot of calls, and if any of these calls is left unattended for more than 2 minutes I have to mark it as "UNHAPPY_CX". I calculate this manually like so:

    search index="calls"
    | eval arrival_time=time_in_queue_call
    | eval picked_up_time=call_taken
    | eval call_code=call_id
    | eval ID=cx_info
    | eval time_now=now()
    | eval time_unattended=if(isnull(call_taken-time_in_queue_call), time_now-time_in_queue_call, call_taken-time_in_queue_call)
    | eval class=if(time_unattended>120,"UNHAPPY_CX","HAPPY_CX")
    | stats values(call_code) as code values(class) as class by ID

But I would like to create intervals of x units of time, such as one hour, two hours, or perhaps three, starting from, say, 8:00 am EST, and get a distinct count of class "UNHAPPY_CX" by call_code ("code"), shown in a table. Say this is the raw data:

    code    ID   class        date_time
    DYUDJ   1    UNHAPPY_CX   8:01
    CBY     2    UNHAPPY_CX   8:06
    XCGH    3    UNHAPPY_CX   9:01
    OJ64    3    UNHAPPY_CX   10:41
    5677H   4    UNHAPPY_CX   10:45
    567F    5    UNHAPPY_CX   11:05

I want to be able to get something like this:

    dc(code)   Interval
    2          8:00-9:00
    1          9:00-10:00
    3          11:00-12:00

I was thinking of building a field with the time of each call, arranging them in chronological order, and maybe creating a new field filtering by day and hour, but I don't know if Splunk has a faster, more efficient way of doing this. The closest answer I came across is a timechart, but I want something in the form of a table. I also want to be able to calculate, in the future, the mean of dc(code) for the past x days. I am truly indebted and thankful to anyone who can shed a bit of light on this problem. Thank you so much, guys!
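A sketch of the bucketing, reusing the fields from the search above; bin does the interval assignment on _time (span=1h here; change to 2h or 3h as needed) and stats dc() gives the table form that timechart would not:

    index="calls"
    | eval time_unattended=if(isnull(call_taken), now()-time_in_queue_call, call_taken-time_in_queue_call)
    | eval class=if(time_unattended>120,"UNHAPPY_CX","HAPPY_CX")
    | where class="UNHAPPY_CX"
    | bin _time span=1h
    | stats dc(call_id) AS unhappy_calls by _time

For the average over past days, a second pass such as | stats avg(unhappy_calls) over these per-interval rows (optionally split by strftime(_time,"%H") to compare the same hour across days) can sit on top of this.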
Is it possible to send Kubernetes/Rancher/RKE health statuses/checks over to Splunk to create some kind of dashboard for visualization? We have used the fluentd feature to send logs to Splunk, but we'd also like to create a dashboard to show the health status of the clusters and services.
I need to trigger an alert when a process is not running. Here is my query, but I cannot get the alert to work:

    index="os" source="Perfmon:Process" host="vm*" process_name="RT*"
    | dedup host process
    | join host
        [ search index="os" source=Perfmon:Process host="vm*" process_name="RT*"
          | stats latest(host) latest(_time) by host
          | eval lastSeen='latest(_time)'
          | fields host lastSeen ]
    | eval status=if(lastSeen<(_time - 300), "not running","running")
    | table host status process_name
    | search status = "not running"

Or is there another way to look for a Windows process that is not running? Thank you.
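A simpler sketch without the join, using only fields already present in the query above: take the most recent sighting per host and process, and flag anything not seen in the last 5 minutes. The search's own time window must be longer than 5 minutes (for example the last 24 hours) so the stale hosts still appear in the results:

    index="os" source="Perfmon:Process" host="vm*" process_name="RT*"
    | stats latest(_time) AS lastSeen by host, process_name
    | eval status=if(now()-lastSeen > 300, "not running", "running")
    | where status="not running"

The caveat with any search-based approach: a process that has never logged at all produces no events to alert on, so a lookup of expected hosts and processes is needed to catch those.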
Hey there, I have seen the Splunk.com answers and the rex cheat sheets online. However, I can't seem to get the rex command to extract what I need from the data. I only need the XX_LMP_123456789_123, without the .pdf. Can someone guide me on how to achieve this? I just need XX_LMP_123456789_123. failureMsg="Failure to populate pdf file for XX_LMP_123456789_123.pdf in LOB_1234567_9_4567890_delivery_.pdf"
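A sketch against the sample event above, assuming the target always follows "populate pdf file for" and ends at ".pdf"; the named capture group (doc_id, a name chosen here for illustration) stops before the extension:

    ... | rex field=failureMsg "populate pdf file for (?<doc_id>\S+)\.pdf"

On the sample this yields doc_id=XX_LMP_123456789_123. If the value always has the shape letters_LMP_digits_digits, a tighter pattern such as (?<doc_id>\w+_LMP_\d+_\d+) avoids false matches.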