All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi there, I am trying to get hourly stats for each status code, plus the percentage for each hour per status. Not sure how to get it. My search:

my search
| bucket _time span=1h
| stats count by _time http_status_code
| eventstats sum(count) as totalCount
| eval percentage=round((count/totalCount),3)*100

Please suggest which command could be helpful here. Thanks
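One common approach (a sketch only, not tested against this data): add `by _time` to the eventstats so the total is summed within each hour rather than across the whole result set, then compute the percentage from that per-hour total:

```spl
| bucket _time span=1h
| stats count by _time http_status_code
| eventstats sum(count) as totalCount by _time
| eval percentage=round(count/totalCount*100, 1)
```

Appended to the base search, each row's percentage then reflects that status code's share of its own hour.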
Hi,

| table "Start connexion" "End connexion"

The result of my search displays a table with a series of two dates. Instead of a table, I need to display these results in a line chart. How can I do this, please?
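One possible direction (a sketch; the timestamp format string is an assumption and must match the actual field contents): parse both dates into epoch time, compute a duration, and chart it over time:

```spl
| eval start=strptime('Start connexion', "%d/%m/%Y %H:%M:%S")
| eval end=strptime('End connexion', "%d/%m/%Y %H:%M:%S")
| eval duration_sec=end-start
| eval _time=start
| timechart span=1h avg(duration_sec) as avg_connection_seconds
```

A line chart needs a numeric y-axis, which is why the two dates are turned into a single duration value here; charting the raw date pairs directly is not meaningful.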
| lookup update=true SpamIntel_by_email_subject subject OUTPUT
| lookup update=true SpamIntel_by_email_subject_wildcard subject OUTPUTNEW

What is update=true? Which field is it comparing in order to update what? What's the difference between OUTPUT and OUTPUTNEW? I didn't understand this well from Splunk's documentation.
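Roughly, and worth verifying against the lookup command reference: OUTPUT overwrites an event field with the lookup's value even if the event already had one, while OUTPUTNEW only writes the lookup field when the event does not already have it; update=true tells a real-time search to re-read the lookup file if it changes while the search runs. A hypothetical illustration (verdict is an invented output field, not from the post):

```spl
| lookup SpamIntel_by_email_subject subject OUTPUT verdict
| lookup SpamIntel_by_email_subject_wildcard subject OUTPUTNEW verdict
```

Here the first lookup would replace any existing verdict on an event; the second only fills verdict for events where it is still empty after the first lookup.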
In the classic dashboard environment we can specify a 'valuePrefix' for an input. In the Dashboard Studio GUI editor there does not seem to be such an option. Is there a way to do this using the code editor? Thanks
Where can I get Splunk Universal Forwarder 7.1.0? All the older releases have been removed from the Splunk portal. Please help me with this.
Hi all, I have created a new field/field alias/field extraction with global permissions. Example:

| eval test="MyApp"

This works fine when I use it in a search, but when I save it as a calculated field it doesn't show up. I refreshed 10 times, even cleared the browser cache and logged back in; still the same issue. We don't see the newly created knowledge object in the logs, but we can run it in searches. Any inputs or help? @woodcock @Splunkers 2022
Hi, I am trying to create an alert for CPU usage using the query below:

index=os host=cbtsv
| stats latest(*) as * by host
| table _time cpu_load_percent cpu_user_percent
| eval CPU=cpu_load_percent+cpu_user_percent
| stats avg(CPU) as percent by host

Here I am trying to add two fields (CPU = cpu_load + cpu_user), but it is not giving the results I expected. I want an alert to be triggered when the average value of CPU = (cpu_load + cpu_user) exceeds 90%. How do I set the alert to meet the conditions above? The final output should look like:

Timestamp            Hostname  CPU  Status
28/02/2022 21:58:00  cbtsv     90%  Critical
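One sketch of how the alert search might look (field names taken from the post; the 90% threshold and the Status label are assumptions): compute the combined value per host, keep only rows over the threshold, and set the alert to trigger when the number of results is greater than zero:

```spl
index=os host=cbtsv
| stats latest(_time) as _time latest(cpu_load_percent) as cpu_load latest(cpu_user_percent) as cpu_user by host
| eval CPU=cpu_load+cpu_user
| where CPU>=90
| eval Status="Critical"
| table _time host CPU Status
```

Note the earlier `| table` before the eval drops the host field, which is one likely reason the original's second stats produced nothing as expected.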
Hi, I need to filter my query for specific field values. The working query is as follows:

index=_index (field_value="value1" OR field_value="value4" OR field_value="value14") | .....

Now, I would like to retrieve those field values from a remote JSON file and pass them to the search condition this way:

index=_index (field_value in listOfFieldValuesFromRemoteJson) | .....

Could you please help me make it work? The JSON file:

[{"field_value": "value1"},{"field_value": "value4"},{"field_value": "value14"}]

Best regards, Dhiaeddine
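A Splunk search cannot fetch a remote JSON file directly at search time; one common pattern is to load the values into a lookup first (the lookup name field_values.csv here is hypothetical) and feed them in via a subsearch:

```spl
index=_index
    [ | inputlookup field_values.csv | fields field_value ]
```

The subsearch expands to (field_value="value1" OR field_value="value4" OR ...) automatically, because the field name returned by the subsearch matches the event field name.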
Hi, I am trying to create an add-on that runs a PowerShell script to perform some actions. Since I don't want to hardcode a path, I would like to access $SPLUNK_HOME within my PowerShell script. As far as I know, $SPLUNK_HOME gets set as an environment variable upon script start, so I'm using:

(get-item env:\SPLUNK_HOME).value

Is there a way to "test run" my scripts with a Splunk environment set?
Hi all, can anyone recommend a feed or app that can fetch the reputation/threat score of an IP and save it in a field? The app for VirusTotal does not fetch the score of an IP; I have already tried it.
Hi, I know many have answered this question before, but I didn't find a complete and detailed answer.

Setup: UF ---> HF ---> IDX

Q1. I have a file called test.txt (location: /send/test.txt). I want to send the txt file from the UF to the HF to the IDX. (Receiving port 9997 is open and configured for all as well.) How do I do it?
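For reference, a minimal sketch of the conf files involved (the index name, sourcetype, group names, and hostnames are placeholders, not from the post). On the UF, inputs.conf monitors the file and outputs.conf forwards to the HF; on the HF, outputs.conf forwards on to the indexer:

```ini
# inputs.conf on the UF
[monitor:///send/test.txt]
index = main
sourcetype = test_txt

# outputs.conf on the UF (points at the HF)
[tcpout]
defaultGroup = hf_group
[tcpout:hf_group]
server = <hf-host>:9997

# outputs.conf on the HF (points at the indexer)
[tcpout]
defaultGroup = idx_group
[tcpout:idx_group]
server = <idx-host>:9997
```

Each stanza set lives in its own file under an app's local directory on the respective host; they are shown together here only for compactness.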
Hi, I have a panel with the query below:

index=int_166167 env=SIT appName="GCR" message="Post Login*"
| bucket _time span=15m
| stats count(userId) as loginUsers, min(timeTaken) as minSLA, max(timeTaken) as maxSLA by _time
| sort -_time
| table _time, loginUsers, minSLA, maxSLA

The panel appears like below:

time              loginUsers  minSLA  maxSLA
28-02-2022 11:00  45          12      67
28-02-2022 11:15  60          13      74
28-02-2022 11:30  35          25      82
28-02-2022 11:45  46          34      45
28-02-2022 11:00  70          57      90
28-02-2022 12:00  35          24      57

My requirement: on click of a maxSLA value (for example 90), it should link to a search which shows the particular max SLA event with 90 from those 70 users. Kindly help with this.
We utilise Enterprise Security and have a large number of detections in use. We have recently put in some testing hardware that could trigger any one of these alerts, and I am trying to find out if there is some way we could suppress or exclude a device if that host potentially triggered these rules. Is there a way to effectively do a global "ignore any alerts from xxxx" without having to edit every single rule?
We are currently auditing the OSB Splunk user access accounts for both of our instances. Unfortunately, Splunk doesn't display under its user settings when an account has been disabled in LDAP (at the moment all the accounts are shown as 'Active'). Also, since user authentication has been configured using LDAP, can you please confirm or advise why it is not possible for the Splunk UI to display an account as disabled when it has been disabled in LDAP?
I want to create an alert if one of our servers is not connected. The server disconnects automatically every 12 hours and reconnects again in a few minutes, so I only need an alert triggered if the server does not reconnect within 10 minutes of getting disconnected.

SourceName="AppLog" Message="service status *"

There are two logs that occur: one is "service status started" and the other is "service status stopped". I need the alert triggered only if the "service status started" log does not appear within 10 minutes of the "service status stopped" log message.
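One way to sketch this (untested; assumes the events carry a host field and that started/stopped can be told apart from Message): find hosts whose most recent status event is a stop that is more than 10 minutes old, and trigger the alert when the search returns any rows:

```spl
SourceName="AppLog" Message="service status *"
| eval state=if(like(Message, "%started%"), "started", "stopped")
| stats latest(state) as last_state latest(_time) as last_seen by host
| where last_state="stopped" AND now()-last_seen>600
```

If the latest event for a host is "stopped" and it is older than 600 seconds, the service did not restart within the 10-minute window.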
Hi, I hope this is the right board to ask these questions; apologies if it's not. I have two issues with my account preferences at the moment. I work for a large organisation and there is a dedicated team looking after the Splunk Cloud platform we use here.
1. A while ago, my search formatting preferences changed somehow, so search terms like stats no longer get colour highlighting to distinguish them in my searches. Now they are just plain text. I don't think I did anything to cause this.
2. I then realised that my default app preference is not being applied when I start Splunk.
Is this something I can fix, or do I need my admins to do it? They are all so busy right now, and I don't want to hassle them with something minor like this. My preferences all look like they are set correctly for both these things to work. Thanks for any assistance, I know you are all probably busy too! Regards, John
I have a search in Splunk and I need to take the output I receive and multiply it by 2. My search query is:

index=app1 AND service=app AND logLevel=INFO AND environment=staging "message.eventAction"=COMPLETE_CREATE
| stats dc(message.userId)

Using this search, I receive a distinct count of 8, but I want that number multiplied by 2. I cannot figure out how to do this after reading other similar questions. I hope someone can help; it seems like it should not be this difficult for a simple multiplication.
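A minimal sketch (the output field names distinct_users and doubled are arbitrary): give the distinct count a name with `as`, then multiply it with eval:

```spl
index=app1 AND service=app AND logLevel=INFO AND environment=staging "message.eventAction"=COMPLETE_CREATE
| stats dc(message.userId) as distinct_users
| eval doubled=distinct_users*2
```

The naming step matters because eval needs a plain field name to reference; the default field name `dc(message.userId)` is awkward to use in later pipeline stages.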
Hello team, I have a lookup table with data for 1000 employees, like email, ID and others. I have a search which also produces similar results: employee email, ID, and status. I want to combine the two so my search produces data only for employees who are in the lookup table. I tried passing the lookup, but it's fetching all data. This is what I am using (EmployeeEmail is a field in the lookup table):

index=Employeedata sourcetype=data
| lookup InT_EM as EmployeeEmail
| table EmployeeEmail, status
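A common pattern for restricting a search to entries in a lookup (a sketch; it assumes the lookup's email column is also named EmployeeEmail): use inputlookup as a subsearch, which turns the lookup rows into OR'd filter terms so only matching events come back:

```spl
index=Employeedata sourcetype=data
    [ | inputlookup InT_EM | fields EmployeeEmail ]
| table EmployeeEmail, status
```

If the lookup column has a different name, rename it inside the subsearch so it matches the event field being filtered.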
We are getting DHCP logs from Windows. I am trying to ingest these logs into UBA; however, UBA requires the lease_duration field. Where is this field? None of my logs has it, or the alert code for it.
I am looking for the best way to prepare for disaster recovery to a remote site. We have a 5-node indexer cluster and want a backup strategy that lets me create a daily backup which can, in case of disaster, be restored to a single-server instance of Splunk Enterprise to provide minimum functionality while the main site is unavailable. The strategy should be able to construct a single tar-compressed file that consolidates all buckets from the 5-node indexer cluster and restore it to a single-node server. Is there a way to construct a single backup (no bucket replication) from an indexer cluster?