All Topics

Hi, can the image show the status? Is this possible in Splunk? Like below: it shows that the toe is up, which means the threshold value has not been reached; if the toe is down, it means the threshold value has been reached. Can anyone help with this? Thanks in advance.
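One possible approach (a minimal sketch; the index, the field named value, and the threshold of 80 are all hypothetical) is to compute an up/down status in the search and let the dashboard swap the displayed image based on a token set from that field:

  index=my_metrics
  | stats latest(value) as value
  | eval status=if(value < 80, "up", "down")

A <done>/<condition> handler in Simple XML can then set an image-path token to one of two image files depending on status.
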
Hi there, I'm trying to filter my search results based on the top numerical values of a field. For example: I have 5k events representing customer purchases, and I want a list of all customers who have never paid more than 10. Put the other way around, I want to filter out all events belonging to customers who have at least one purchase event with a field value higher than 10. Which command could I use? delete removes the events from the index, but I only want to filter my search results. If I filter for "field<=10", I only get the purchase events that are smaller than 10, but the list still contains customers who have paid more than 10; it merely removed those specific events.
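A common pattern here (a sketch, with hypothetical field names customer and price) is eventstats, which annotates every event with the per-customer maximum so that entire customers can be filtered out in one pass:

  index=purchases
  | eventstats max(price) as max_price by customer
  | where max_price <= 10

This keeps only the events of customers who never exceeded 10, without removing anything from the index.
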
I want to know the active user count of an application. The following is the query I created; however, it's not giving the output. Can someone guide me, please?

index=application host="Server Name" sourcetype="Applicationprod-console-logs" "[AccessLogFilter]"
| rex "^\S+ \S+ \S+ \S+ \S+ (?<USER>\S+) (?<ACTION>\S+) (?<URL>\S+) (?<SIZE>\d+)kb"
| timechart span=15m dc(USER) as Application_User

I am getting the following output:

  _time                  application_User
  2021-11-09 00:00:00    0
  2021-11-09 00:15:00    0
  2021-11-09 00:30:00    0
  2021-11-09 00:45:00    0

If I run the following query, the logs show that there is a user ID in the output:

index=application host="Server Name" sourcetype="Applicationprod-console-logs" "[AccessLogFilter]"

Output:

2021-11-09 08:29:12,787 INFO [http-nio-127.0.0.1-8085-exec-101 url: /deploy/viewDeploymentProjectEnvironments.action] [AccessLogFilter] USERID GET application.url.action?id=665059902 4320055kb
host = Server  source =  location = application-prod-console-logs
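For what it's worth, counting the whitespace-separated tokens in that sample event suggests the rex never matches: USERID is the eighth token, but the pattern skips only five before capturing, so SIZE lands on a non-numeric token and the whole match fails, leaving USER empty and dc(USER) at 0. A sketch of an alternative that anchors on the literal [AccessLogFilter] marker instead of counting fields (field names kept from the original query):

  index=application host="Server Name" sourcetype="Applicationprod-console-logs" "[AccessLogFilter]"
  | rex "\[AccessLogFilter\]\s+(?<USER>\S+)\s+(?<ACTION>\S+)\s+(?<URL>\S+)\s+(?<SIZE>\d+)kb"
  | timechart span=15m dc(USER) as Application_User
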
Hi Team,
1) I am searching for the CyberArk keyword error APPAP100E in Splunk; we are not getting any output.
2) I am searching for the particular word LicenseVerifier, but not getting any output.
3) I am searching for the SSLHandshakeException error, but not getting event details in the Splunk UI.
4) Please help and guide us to resolve this.
Thanks and regards
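As a starting point (a sketch only; the index wildcard and time range are placeholders to adjust), checking whether the terms exist anywhere at all can help separate an ingestion or permissions problem from a search problem:

  index=* earliest=-24h ("APPAP100E" OR "LicenseVerifier" OR "SSLHandshakeException")
  | stats count by index, sourcetype
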
Good day, I am having a problem generating a custom event via a POST request. I suspect that the elements I am using are not entirely correct, and I just need some pointers on what to tweak to get it to work. POST URL: "https://appd.controller.urlXXX/controller/rest/applications/XXX/events?severity=ERROR&summary= Application Stopped_Test&eventtype=CUSTOM&customeventtype=D_Test&comment=Please_start_application". The response indicates that the event was successfully created ("Successfully created the event id:6665520987"), but when I check the events dashboard in AppD, I see zero events. However, when I do not specify the event type and custom event type, I am able to view the custom event in AppD. It seems that the eventtype and customeventtype properties are the problem. Any ideas? Regards, Dean
Hi Team, I am very new to Splunk dashboards. Do you think we can form some SQL queries on the backend for any of the tasks below, so that we can capture them via a Splunk dashboard? (See the sketch after this list for one example.)
1. API service up/down
2. Response time by UI API
3. Frequent users
4. CPU utilization
5. Memory utilization
6. Login statistics
7. API error report
Database we use: HDFS
Regards, Harish
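As one concrete example of the kind of search that could back such a panel (a sketch; the index, sourcetype, and status field are hypothetical, and this assumes the logs are ingested into Splunk rather than queried in place on HDFS), an API error report might look like:

  index=api_logs sourcetype=api:access status>=500
  | timechart span=1h count as api_errors
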
Hi All, I have a query which returns all the events for two Hybris pods. When I use stats, it shows the number of events under each pod, but when I try to check the events for a particular pod, it shows 0 events. Attaching a snippet for better understanding. Please provide your suggestions.
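One sanity check worth trying (a sketch; the index and the pod field name are hypothetical, since the snippet isn't reproduced here): list the exact values stats sees, then paste one verbatim into the filter, quoting it in case it contains characters that break term matching:

  index=hybris
  | stats count by pod

  index=hybris pod="exact-value-copied-from-stats"
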
I have a concern about indexer discovery: will the forwarders learn the IPs of the indexers, or will they learn the hostnames/FQDNs?
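For reference, this is the shape of the forwarder-side configuration involved (a sketch; the group name, manager host, and key are placeholders):

  # outputs.conf on the forwarder
  [indexer_discovery:cluster1]
  master_uri = https://manager.example.com:8089
  pass4SymmKey = <your_key>

  [tcpout:discovered_indexers]
  indexerDiscovery = cluster1

The peers register their receiving endpoints with the cluster manager, which then hands the list to the forwarders.
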
I have a field 'Value' in a dashboard that is validated by an external .js file. When the validation fails, it shows the error message "Please input a valid Value". But when I ignore that message and hit the submit button, it submits the record anyway. I want the dashboard not to submit until the field contains a valid value and no validation message is shown. I have many input fields, but I am validating only the 'Value' field, and I don't want the user to be able to submit the record while ignoring the validation message.

The input in the dashboard XML:

  <input type="text" token="value" id="tkn_value_id" searchWhenChanged="true" depends="$some_condition$">
    <label>Value</label>
  </input>

The validation script:

  require([
      'underscore',
      'splunkjs/mvc',
      'jquery',
      'splunkjs/mvc/simplexml/ready!'
  ], function(_, mvc, $) {
      var tkn_value = mvc.Components.getInstance("tkn_value_id");

      tkn_value.on("change", function() {
          // Read the input's current text rather than the change-event object
          var userInput = tkn_value.val();
          if (!isValueValid(userInput)) {
              alert("Enter Valid Value");
              return false;
          }
      });

      // Invalid when empty, or when it has leading/trailing whitespace or a double quote
      function isValueValid(userInput) {
          if (!userInput) return false;
          return userInput.match(/(^\s|\"|\s$)/g) === null;
      }
  });
Hello there. I was wondering: is there any way to generate _events_ in search? I mean, I know of the makeresults command, of course, but it generates stats-style results, not events per se. Is there any way to generate events at search time, to (for example) test parsing rules? Something like (pseudocode):

  | <generate_my_events> | eval _raw="blah blah" | eval source="syslog" | eval sourcetype="whatever:syslog"
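For what it's worth, makeresults can get close to this (a sketch; the raw string and sourcetype values are placeholders). Populating _raw along with the metadata fields makes the result render like an event, though index-time processing such as line breaking and timestamp extraction will not run on it:

  | makeresults
  | eval _raw="Nov 10 12:00:00 myhost sshd[1234]: Accepted password for admin"
  | eval source="syslog", sourcetype="whatever:syslog"
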
Hi All, I have created a Tools button (as shown in the screenshot below). I need to add the Edit, Export and "..." buttons (highlighted in yellow) inside the Tools button. When the Tools button is clicked, Edit, Export and "..." should appear as a dropdown under it. Please help with jQuery/XML for achieving this. Thank you!
Hi all, I'm planning an architecture with redundant Heavy Forwarders and two syslog collector servers. Where do I place a load balancer, and how do these components communicate in terms of ports and firewalls? What do I need to plan for? I can't find the right places in the documentation to read about this. Thank you for your help in advance. Oj.
Hello, I am working on a distributed environment with:
- 1x SH with Splunk ES installed (Deployment Server)
- 7x Indexers (Search Peers)
On my SH, I see a lot of skipped executions of scheduled searches related to the Splunk CIM app. Specifically, I see a 99% skip ratio on scheduled reports with a name format of: _ACCELERATE_DM_Splunk_SA_CIM_Splunk_CIM_Validation.[Datamodel_Name]_ACCELERATE_ I accessed the Data Models page and expanded the CIM Validation (S.o.S) data model. The information I got is "Access Count: 0 - Last Access: -", while its size is 750MB and it is frequently updated. My question: can I disable acceleration on this data model, since it is never accessed? Thank you in advance. With kind regards, Chris
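If disabling does turn out to be appropriate, acceleration can be switched off from the Data Models page (Edit > Edit Acceleration) or in configuration (a sketch; the stanza name is inferred from the report name above, so verify it against the datamodels.conf shipped with Splunk_SA_CIM):

  # etc/apps/Splunk_SA_CIM/local/datamodels.conf
  [Splunk_CIM_Validation]
  acceleration = false
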
Hey! I have an HTML form. Can I use it in an alert to send a message, so that what arrives is not just a plain message but a message in the form of HTML?
Previously, my heavy forwarder was working fine; I was able to search the latest logs from my search head. But upon testing another app for another SIEM on the heavy forwarder, it has been routing data there since. Now that the POC has ended, we want to switch back to sending data to our Splunk indexer. We removed the SIEM app and are left with our outputs for this forwarder, which point to the Splunk indexer IP. I tried restarting the Splunk service on this heavy forwarder, but I am still unable to search those hosts from the search head. Is there anything to look out for?
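One thing worth checking (a sketch; the group name and IP below are placeholders): dump the effective outputs configuration after removing the app, since routing stanzas from the POC can survive in other config locations, and also look for leftover _TCP_ROUTING overrides in inputs.conf or transforms.conf:

  ./splunk btool outputs list --debug

The remaining configuration would be expected to look something like:

  [tcpout]
  defaultGroup = splunk_indexers

  [tcpout:splunk_indexers]
  server = 10.0.0.10:9997
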
I tried to upgrade Python for Scientific Computing to v5.3 on my cluster. I followed the instructions here and first un-tarred the add-on on my master node. However, whenever I run /opt/splunk/bin/splunk apply shcluster-bundle, I always get this error:

  Error while deploying apps to first member, aborting apps deployment to all members: Error while updating app=Splunk_SA_Scientific_Python_linux_x86_64 on target=https://172.18.109.2:8089: Network-layer error: Broken pipe

This kind of error didn't happen when I installed another, smaller add-on earlier this morning. After failing with that error multiple times, I noticed the doc says: "If you attempt to push a very large tarball (>200 MB), the operation might fail due to various timeouts. Delete some of the contents from the tarball's app, if possible, and try again." The original python-for-scientific-computing-for-linux-64-bit_300.tgz is already 480MB, and the size of the un-tarred /opt/splunk/etc/shcluster/apps/Splunk_SA_Scientific_Python_linux_x86_64 folder on the master node is 2.5GB! I bet the large tarball size is the problem. But how can I solve it? What contents of the tarball's app can I delete?
How does the DMC calculate load average? I understand the number comes from the REST API, but that does not explain how exactly the number is calculated, and the number is quite different from the Linux load average shown by the uptime command. Thanks. Hanny.
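For comparison, the underlying resource-usage data can be queried directly (a sketch; I believe the DMC panels read from introspection data, but the exact search can be confirmed by opening a panel in search mode). Note that introspection reports a normalized load average, scaled by core count, which could account for the difference from uptime:

  index=_introspection sourcetype=splunk_resource_usage component=Hostwide
  | timechart avg(data.normalized_load_avg_1min) as normalized_load_avg_1min
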
We have Splunk 8.0.3 deployed to a private AWS cloud. We use AWS i3.8xlarge instance types for our indexers, recently upgraded from i3.4xlarge. We combine the 1.7TB "ephemeral" volumes into a logical volume group and use them for Splunk index buckets mounted on /opt/splunk/var/lib/splunk. When we were running on i3.4xlarge instances, where we had two 1.7TB volumes, we were using 3TB of the 3.4TB logical volume group per indexer for Splunk indexes. When we upgraded to i3.8xlarge, we removed the old indexers, and the new indexers are only using 200GB of the 6.8TB logical volume groups, slowly creeping up at about 4GB/hour. I have tried running searches over long periods of time, but they fail with:

  DAG Execution Exception: Search has been cancelled
  Search auto-canceled
  The search job has failed due to an error. You may be able to view the job in the Job Inspector

How do I get the cache volumes to fill up again quickly with index data from the S3 storage, so my searches will be fast and complete again?
One of the requirements of a multisite indexer cluster with SmartStore is: "Site locations host two object stores in an active-active replicated relationship. Depending on the deployment type, the set of cluster peer nodes can be sending data to one or both object stores." To fulfill this requirement, is there some configuration I have to do in Splunk, or in the S3 bucket?
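For context, a sketch of what the per-peer Splunk side looks like (the bucket name and endpoint are hypothetical); each peer points at a single remote volume endpoint, and my understanding is that the active-active replication between the two object stores is provided by the storage layer itself rather than by any Splunk setting:

  # indexes.conf on the peer nodes
  [volume:remote_store]
  storageType = remote
  path = s3://smartstore-bucket
  remote.s3.endpoint = https://s3.site1.example.com

  [default]
  remotePath = volume:remote_store/$_index_name
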
Hello All, does anyone know how I can get the latest date from a lookup file? I am using the script below:

  | inputlookup append=t Blue_Marbles_Report.csv
  | rename "Last Scan Date" as "Last_Scan_Date"
  | eval updated=strptime(Last_Scan_Date,"%FT%T%:z")
  | eval desired_time=strftime(updated, "%B %d, %Y")
  | stats latest(desired_time) as desired_time
  | table Marbles, desired_time

But latest(desired_time) does not deliver any results. This is what I have in my original file:

  Marbles    Last_Scan_Date
  Blue       08/01/2020
  Blue       10/04/2020
  Blue       11/08/2021

Desired result:

  Marbles    desired_time
  Blue       11/08/2021

Hope to get some help on this. Thanks in advance.
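A likely culprit (a sketch, assuming the dates really are month/day/year as in the sample rows): the format string "%FT%T%:z" expects an ISO-8601 timestamp, so strptime returns null for values like 11/08/2021, and every downstream field ends up empty. Parsing with the matching format and taking max() of the numeric epoch (rather than latest(), which relies on event-time ordering that lookup rows don't have) sidesteps both problems:

  | inputlookup Blue_Marbles_Report.csv
  | rename "Last Scan Date" as Last_Scan_Date
  | eval updated=strptime(Last_Scan_Date, "%m/%d/%Y")
  | stats max(updated) as updated by Marbles
  | eval desired_time=strftime(updated, "%B %d, %Y")
  | table Marbles desired_time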