All Topics

I've been searching Splunk Answers all morning trying to get this one. It seems simple enough, but I can't lick it and I'm just spinning my wheels. I'm trying to get a percentage uptime based on the TA_nix ps sourcetype. The rub is that it's for a two-node cluster, so when one host is down and the other is still up, the cluster as a whole is still up, and that's what they want. Also, the search I'm running sometimes returns results greater than 100%, even when I break it down by Node1 and Node2. I'm counting on ps to poll 1 result per minute for this process. Here's my search, and a sample set of results so you can see what I'm working with:

index="os" sourcetype="ps" USER=processuser COMMAND="commandiwanttocheck" (host=homehostsh OR host=someotherhosts)
| lookup serverinfo_lookup hostname AS host OUTPUTNEW ServerType ClusterNode
| stats count(COMMAND) as TotalResponses max(_time) as last_time min(_time) as first_time by ClusterNode ServerType
| eval minutes=((last_time-first_time)/60)
| eval Percent=round(((TotalResponses)/minutes)*100,2)

The result of the search is this (I've still got my "working" fields in there):

ClusterNode  ServerType  TotalResponses  last_time   first_time  Percent  minutes
Node1        API_High    240             1585850077  1585835725  100.33   239.2
Node2        API_High    240             1585850069  1585835718  100.34   239.1833333
Node1        API_Low     240             1585850099  1585835749  100.35   239.1666667
Node2        API_Low     240             1585850060  1585835704  100.31   239.2666667
Node1        Batch_High  240             1585850067  1585835717  100.35   239.1666667
Node2        Batch_High  240             1585850078  1585835723  100.31   239.25
Node1        Batch_Low   240             1585850085  1585835732  100.33   239.2166667
Node2        Batch_Low   240             1585850070  1585835717  100.33   239.2166667
Node1        DMZ         240             1585850051  1585835702  100.36   239.15
Node2        DMZ         240             1585850084  1585835732  100.33   239.2
Node1        Internal    240             1585850079  1585835727  100.33   239.2
Node2        Internal    239             1585850042  1585835752  100.35   238.1666667
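A likely reason the percentages exceed 100%: with N one-per-minute samples, last_time - first_time spans only N-1 minute intervals, so dividing the count by the elapsed minutes overshoots slightly. A minimal Python sketch, using the Node1 API_High numbers from the question above:

```python
# 240 samples taken one per minute: the first and last are ~239 minutes
# apart, so dividing the count by the elapsed minutes overshoots 100%.
first_time = 1585835725
last_time = 1585850077
total_responses = 240

minutes = (last_time - first_time) / 60                   # ~239.2 elapsed minutes
naive_pct = round(total_responses / minutes * 100, 2)     # fencepost error: > 100

# N samples cover N-1 one-minute gaps, so dividing by (minutes + 1)
# keeps a fully-up node at or below roughly 100%.
corrected_pct = round(total_responses / (minutes + 1) * 100, 2)

print(naive_pct, corrected_pct)  # -> 100.33 99.92
```

An alternative is to compare the count against a fixed expected number of polls for the chosen time window instead of the observed min/max timestamps.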
I've been struggling with various ways of getting a single-value display where I can use a field different from the actual value I want to display as the threshold for colouring. This add-on does exactly that job, but I need to display a set of values on a trellis, and then drill down to another dashboard using the description / text title of each trellis box as a token to pass along. It seems this add-on doesn't support tokens at the moment, or am I missing something?
How can I use the splunkjs "Service" class to make POST changes to .conf files via the REST API in a Splunk Simple XML JavaScript dashboard (embedded via <dashboard script="myscript.js">)? I've tried to use the documentation, but it is unclear to me whether I can use this class in JavaScript in a Simple XML dashboard, and how. https://docs.splunk.com/DocumentationStatic/JavaScriptSDK/1.0/splunkjs.Service.Endpoint.html#splunkjs.Service.Endpoint-post

Previously I've used SearchManager objects to query the REST API via the "| rest" command, but apparently that command only allows GET requests, not POST. This is not suitable for me, as I want to CHANGE a config file directly via a JavaScript call to the REST API, without a workaround such as a custom Python search command. Here is what I have so far:

require([
    'jquery',
    'underscore',
    'splunkjs/mvc',
    'splunkjs/mvc/searchmanager',
    'splunkjs/mvc/simplexml/ready!'
], function($, _, mvc, SearchManager) {
    // build an authenticated Service object from the dashboard's context
    var service = mvc.createService();
    var endpoint = new splunkjs.Service.Endpoint(service, "search/jobs/12345");
    endpoint.post("control", {action: "cancel"}, function(err, response) {
        console.log("CANCELLED");
    });
});
Hello, I have two servers, both of which are in the same deployment class on my server. How can I set different sources per host when the monitored file is the same? E.g.:

Server 1: HOSTNAME = UKWEBDEV01
C:\FileToMonitor\ABCDE\1234.Log
C:\FileToMonitor\ABCDE\1234.Log
C:\FileToMonitor\ABCDE\1234.Log
Source = InsitePortalLogs
Sourcetype = IIS

Server 2: HOSTNAME = UKWEBDEV05
C:\FileToMonitor\ABCDE\1234.Log
C:\FileToMonitor\ABCDE\1234.Log
C:\FileToMonitor\ABCDE\1234.Log
Source = PlatformPortalLogs
Sourcetype = IIS

Thank you! JG
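If the two hosts can be split into separate server classes, one common pattern (a sketch only; the server-class and app names below are hypothetical) is to deploy a small host-specific app to each server, each carrying its own inputs.conf with the desired source:

```ini
# --- serverclass.conf on the deployment server (hypothetical names)
[serverClass:insite_portal]
whitelist.0 = UKWEBDEV01
[serverClass:insite_portal:app:inputs_insite]
restartSplunkd = true

[serverClass:platform_portal]
whitelist.0 = UKWEBDEV05
[serverClass:platform_portal:app:inputs_platform]
restartSplunkd = true

# --- inputs.conf inside the inputs_insite app (reaches only UKWEBDEV01)
[monitor://C:\FileToMonitor\ABCDE\1234.Log]
source = InsitePortalLogs
sourcetype = IIS
```

The inputs_platform app would carry the same monitor stanza with source = PlatformPortalLogs.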
We have an XML document imported into Splunk.
I am reading different logs from the same source folder, but not all files are getting read: one stanza works, the others don't. If I restart the UF, all stanzas work, but changed data is not captured by one stanza. The files I am planning to monitor are:

performance_data.log
performance_data.log.1
performance_data.log.2
performance_data.log.3
performance.log
performance.log.1
performance.log.2
SystemOut.log

My inputs.conf file:

[default]
host = LOCALHOST

[monitor://E:\Data\AppServer\A1\performance_data.lo*]
source = applogs
sourcetype = data_log
index = my_apps

[monitor://E:\Data\AppServer\A1\performance.lo*]
source = applogs
sourcetype = perf_log
index = my_apps

[monitor://E:\Data\logs\ImpaCT_A1\SystemOu*]
source = applogs
sourcetype = systemout_log
index = my_apps

The \performance_data.lo* and \SystemOu* stanzas are working fine, but the performance.lo* stanza is not working; it only sends data when I restart the UF. Am I doing anything wrong here?
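One possible explanation (an assumption, not confirmed from the question): the monitor input identifies files by a CRC of their first 256 bytes, so rotated copies that begin with identical bytes can be mistaken for already-read files. Adding crcSalt = <SOURCE> folds the full path into that checksum, so each file is tracked separately:

```ini
# sketch: the same stanza with crcSalt added
[monitor://E:\Data\AppServer\A1\performance.lo*]
source = applogs
sourcetype = perf_log
index = my_apps
crcSalt = <SOURCE>
```

Note that <SOURCE> is a literal token here, not a placeholder to substitute.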
Hi, I was creating a sample alert on

index=_internal | stats count by host

for the last 15 minutes, with the alert to be triggered on a custom condition:

search count > 0

But I am unable to see any alert triggered. Is there anything I am doing wrong? Trigger action: Add to Triggered Alerts.
I have a field username. The values show up as username=mike, and in some cases username=mike. with a dot at the end. How do I remove the dot from the end? This is messing up my stats values(xyz) by username.
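In SPL this is typically an eval, e.g. rtrim on the field; the equivalent string operation, sketched in Python:

```python
# strip any trailing dots from a username, mirroring SPL's
# | eval username=rtrim(username, ".")
def clean_username(username):
    return username.rstrip(".")

print(clean_username("mike."))  # -> mike
print(clean_username("mike"))   # -> mike (unchanged)
```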
We migrated the MSCS TA to a new HF and are receiving authentication errors even though we're using the same client id, secret, and tenant id. I'm thinking it's likely something network-related and the auth error is misleading. I'm trying to find out exactly which URL(s) I need to whitelist; I can't find anything in the docs. Here is the error:

"REST Error [400]: Bad Request -- Account authentication failed. Please check your credentials and try again"

Like I said, I am using the same credentials, just a different host. Thanks.
I was testing a curl script with the WebTools app in Splunk and it gives me an error. Could you let me know what the issue could be? The actual curl command that works is:

curl -X GET -H 'apiKey: 232040b2-22ac-11ea-978f-2e728444444' http://xxx.com/token/auth

Below is the translated version in WebTools, which is not working, stating that the header is not being passed correctly:

| eval header = "{\"apiKey\":\"232040b2-22ac-11ea-978f-2e728ce8 444444\"}"
| curl method=get http://xxx.com/token/auth headerfield=header
| table curl*

Is there anything I am doing wrong? Please suggest.
Hi All, I have some messages being truncated in Splunk even though all other similar messages are parsed perfectly. There is no truncation error in the logs. Messages are about 400 lines/event.

props.conf (indexer):

TRUNCATE = 0
MAX_EVENTS = 1000

splunkd.log (indexer) warnings:

04-02-2020 14:48:57.391 +0200 WARN DateParserVerbose - Time parsed (Sun Sep 23 12:09:23 2012) is too far away from the previous event's time (Thu Apr 2 14:48:56 2020) to be accepted. If this is a correct time, MAX_DIFF_SECS_AGO (3600) or MAX_DIFF_SECS_HENCE (604800) may be overly restrictive. Context: source::/APPLICATIONS/WebSphere/Logs/xxx/xxxEndpointCLONE2/xxxxENDPOINT_Splunk-messages.log|host::xxxx|ip_messages_sourcetype|680577

04-02-2020 14:56:40.255 +0200 WARN DateParserVerbose - A possible timestamp match (Sun Sep 23 12:09:23 2012) is outside of the acceptable time window. If this timestamp is correct, consider adjusting MAX_DAYS_AGO and MAX_DAYS_HENCE. Context: source::/APPLICATIONS/WebSphere/Logs/rxxx/xxxxEndpointCLONE2/xxxxENDPOINT_Splunk-messages.log|host::xxx|ip_messages_sourcetype|680623

04-02-2020 14:56:40.255 +0200 WARN DateParserVerbose - Failed to parse timestamp. Defaulting to timestamp of previous event (Thu Apr 2 14:56:39 2020). Context: source::/APPLICATIONS/WebSphere/Logs/xxx2xxx/xxxEndpointCLONE2/xxxxENDPOINT_Splunk-messages.log|host::xxxx|ip_messages_sourcetype|680623

Best Regards,
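The warnings point at timestamp parsing rather than truncation: an event whose extracted date (Sep 2012) falls outside the acceptance window is given the previous event's timestamp, and date-based line breaking can then split the 400-line events. A sketch of the relevant props.conf settings (the sourcetype name is taken from the log context in the question; the values are illustrative, not a recommendation):

```ini
# props.conf (indexer) -- illustrative values only
[ip_messages_sourcetype]
TRUNCATE = 0
MAX_EVENTS = 1000
# widen the accepted timestamp window if the 2012 dates are genuine
MAX_DAYS_AGO = 3650
# or pin the timestamp location so stray dates inside the event are ignored
# TIME_PREFIX = ^
# MAX_TIMESTAMP_LOOKAHEAD = 30
```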
Hi, we need to provide a report capturing how long a Splunk instance was down in the past. Is it possible to capture this using the internal logs? What Splunk query can we use to get the duration? Note: currently the Splunk instances are up and running.
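One common approach is to treat large gaps between the instance's own _internal events as downtime (in SPL, typically a streamstats over the deltas of _time). The gap arithmetic itself, sketched in Python with made-up timestamps:

```python
# given timestamps of consecutive _internal events (in seconds), treat any
# gap larger than `threshold` as a downtime window and sum those gaps
def total_downtime(timestamps, threshold=60):
    ts = sorted(timestamps)
    return sum(
        later - earlier
        for earlier, later in zip(ts, ts[1:])
        if later - earlier > threshold
    )

# events flow every ~30s, then a 600s outage, then resume
events = [0, 30, 60, 660, 690]
print(total_downtime(events))  # -> 600
```

The threshold should sit comfortably above the normal event cadence so ordinary quiet periods are not counted as outages.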
- How does this app bring in data? I know the script pulls in the data, but is there anything I need to do to have the script run? I see there are two sourcetypes being used, one in inputs.conf and one in props.conf, so on Sunday will it download the files and use the sourcetype in props.conf? I have downloaded the app; do I just wait until Sunday for the data to ingest?

- After changing the directory in the script to where the files will be downloaded, as well as changing the interval (cron schedule) in inputs.conf, I am not seeing the data downloaded to my directory. Am I missing something?
Hi, I have a requirement for our project where a Splunk container has to be deployed in OpenShift 4.3 and integrated with OpenShift 4.3 so that all the logs can be seen in Splunk. I have 2 queries:

1. Deployment of Splunk on OpenShift 4.3.
2. Integration of Splunk so that all the logs of OpenShift will be forwarded to Splunk.

I pulled the Docker image of Splunk on a standalone AWS instance; it runs fine and I am able to log in. When I deploy the same image in OpenShift it throws an error (I even configured SPLUNK_HOME as /opt/splunk):

sh: /opt/container_artifact/splunk-container.state: Permission denied

In the debug terminal I also cannot find the file splunk-container.state in the /opt/container_artifact/ folder, and in /opt/splunk/etc there is no splunk-launch.conf:

Couldn't read "/opt/splunk/etc/splunk-launch.conf" -- maybe $SPLUNK_HOME or $SPLUNK_ETC is set wrong

What has to be done to bypass this error and deploy Splunk? Is there any documentation available from Splunk on the steps to integrate Splunk with OpenShift 4.3?

Thanks, SRK
Hello, my question is about how to make a condition based on the result of a search (a single value): if that value > X, then display another dashboard on the same principal dashboard. I need your help please! Thanks in advance.
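In Simple XML this kind of condition is usually handled with a <done> handler on the search that sets a token when the condition holds, plus a row that depends on that token. A sketch (the index, threshold, and token name are hypothetical):

```xml
<dashboard>
  <row>
    <panel>
      <single>
        <search>
          <query>index=main | stats count</query>
          <done>
            <condition match="'$result.count$' > 100">
              <set token="show_details">true</set>
            </condition>
            <condition>
              <unset token="show_details"></unset>
            </condition>
          </done>
        </search>
      </single>
    </panel>
  </row>
  <row depends="$show_details$">
    <panel>
      <html>
        <p>This panel only appears when the count exceeds the threshold.</p>
      </html>
    </panel>
  </row>
</dashboard>
```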
Hi, as a license manager I would like insight into the licenses and support services that are contracted, for example to be able to renew them in time. I already created an account, but of course this is not linked to purchases already made. Who can fix this for me? I cannot create a support ticket because I probably have no active support contract linked, so I hope someone can help me this way. Please email me to get details. Paul
Hi Awesome People,

We are making a Splunk app for one of our products, and the goal is to display the stats collected from that product's usage to the customer in the form of pretty dashboards. We have exposed all of those stats as REST APIs, which can be used from anywhere with API-key authentication. So far so good. Now here's a decision I cannot make and need your help with. Which is the preferred method of achieving the above?

1. Use a modular input to poll our APIs and index the results in Splunk, then simply use Splunk's query language to get the stats from the indexed data.
2. Create custom search commands that communicate with our REST APIs, then use these custom commands in dashboards to render the data.

I don't have much experience with Splunk, so I don't know which of the above options is less complex in terms of time, memory, and storage. Please guide me on which method I should use.

Thanking you all for reading my query and helping me out in any way.

Regards, Umair
I indexed data from a local directory. All of the files are web access logs, so I set the sourcetype to access_combined. When I run my search (searching only on the source), I get 0 events in return. I once had the same source indexed but deleted it, removing both the indexed data and the data source. The data input recognizes the 9 files that the directory contains, but the search doesn't find any events. I don't know why and have no idea what to do. I have already restarted Splunk and deleted the indexed data and the data input again, but nothing works.
Situation: I have a panel. The panel creates a token for me from a field I extract in the search. In the same panel I create a URL that enables me to "drill down". The relevant parts of the panel (abbreviated):

<panel>
  <title>Open Tickets</title>
  <search>
    ... | table "ticket_num", "Creation Date", "other_field", "other_field"
    $field3.earliest$ $field3.latest$
  </search>
  $result.ticket_num$
  <!-- options: 5, none, row, true -->
  <drilldown>
    <eval>replace($row.url$, "http://", "")</eval>
    <link target="_blank">
      <![CDATA[ https://other.tool/$ticket_num$ ]]>
    </link>
  </drilldown>
</panel>

My table appears to work nicely, where each row represents a unique open ticket.

Problem: All rows, upon clicking, "drill down" only to the first row's ticket_num. Which I think makes sense.

Question: Can you please share whether it is possible, with ES Simple XML and without any other add-on, to modify the panel so that, upon clicking a row, I drill down to the respective row's unique ticket_num?
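In a table drilldown, tokens of the form $row.<fieldname>$ resolve to the value from the clicked row, so the link can reference $row.ticket_num$ directly rather than a token populated from the first result. A sketch (the field name and target URL shape are taken from the question above):

```xml
<table>
  <search>
    <query>... | table ticket_num, "Creation Date"</query>
  </search>
  <drilldown>
    <link target="_blank">
      <![CDATA[ https://other.tool/$row.ticket_num$ ]]>
    </link>
  </drilldown>
</table>
```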