All Topics


I want a percentage symbol in the radial gauge chart. How do I add it? I tried it from the XML code, and when adding a format option it showed a "node not found" error. Could you please help me add the percentage symbol after the value (e.g. 19%)? Here is the panel XML:

        table percent_pass</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
      <option name="charting.axisTitleX.visibility">visible</option>
      <option name="charting.axisTitleY.visibility">visible</option>
      <option name="charting.axisTitleY2.visibility">visible</option>
      <option name="charting.chart">radialGauge</option>
      <option name="charting.chart.rangeValues">[0,30,70,100]</option>
      <option name="charting.chart.style">minimal</option>
      <option name="charting.gaugeColors">["0xdc4e41","0xf8be34","0x53a051"]</option>
      <option name="charting.legend.placement">right</option>
      <option name="height">200</option>
      <option name="refresh.display">progressbar</option>
    </chart>
  </panel>
  <panel>
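If a single number with a unit is acceptable, one workaround (a sketch, not necessarily the only option, reusing the query and range values from the panel above) is to switch the panel to a single value visualization, which has documented unit options:

  <single>
    <search>
      <query>... | table percent_pass</query>
      <earliest>-24h@h</earliest>
      <latest>now</latest>
    </search>
    <!-- show "%" after the number -->
    <option name="unit">%</option>
    <option name="unitPosition">after</option>
    <option name="rangeValues">[30,70]</option>
    <option name="rangeColors">["0xdc4e41","0xf8be34","0x53a051"]</option>
    <option name="useColors">1</option>
  </single>

The <format> element is documented for <table> panels, which would be consistent with the "node not found" error when it is placed under a <chart> element.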
Can anyone offer some guidance on how to go about creating a query that pulls the following fields from each event?

  Start_time (date and time) — different from the _time field
  End_time (date and time) — different from the _time field
  usage_amount (a whole number)

I would like to calculate the time difference between the start and end times, and for every event whose start and end times span more than one day, split the original event into multiple individual events, so the search returns a modified list of events where each event's start and end times fall within the same day.
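A sketch of one common approach, assuming Start_time and End_time are strings in "%Y-%m-%d %H:%M:%S" format (adjust the strptime format to your data): generate one value per calendar day spanned with mvrange, fan out with mvexpand, then clamp each piece to its day boundaries.

  ... your base search ...
  | eval start=strptime(Start_time, "%Y-%m-%d %H:%M:%S"), end=strptime(End_time, "%Y-%m-%d %H:%M:%S")
  | eval day=mvrange(relative_time(start, "@d"), end, 86400)
  | mvexpand day
  | eval seg_start=max(start, day), seg_end=min(end, day + 86400)
  | eval duration=seg_end - seg_start
  | eval Start_time=strftime(seg_start, "%Y-%m-%d %H:%M:%S"), End_time=strftime(seg_end, "%Y-%m-%d %H:%M:%S")
  | table Start_time End_time duration usage_amount

The fixed 86400-second step ignores DST shifts, and usage_amount is carried through unchanged; prorating it by each segment's share of the total duration would be one extra eval.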
Hi all, I have a table where I would like to transpose only one column, using the values from another column. It looks like this:

  Order   Date        Count   Shift
  M5678   01/01/2023  12      A
  M5678   01/01/2023  13      B
  M1234   01/01/2023  13      A
  M1234   01/01/2023  15      B

And I would like to achieve this:

  Order   Date        A    B
  M5678   01/01/2023  12   13
  M1234   01/01/2023  13   15

Can someone please help with this? Thank you so much.
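One sketch: pack Order and Date into a single row key, pivot with xyseries, then unpack (the "|" separator is an arbitrary choice, assumed not to appear in the data):

  ... | eval key=Order . "|" . Date
  | xyseries key Shift Count
  | eval Order=mvindex(split(key, "|"), 0), Date=mvindex(split(key, "|"), 1)
  | fields - key
  | table Order Date A B

If Date can be dropped, a single | chart latest(Count) over Order by Shift does the pivot in one step.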
My org is moving away from PagerDuty to Opsgenie, and the Opsgenie documentation makes setup seem fairly quick. After doing the suggested steps (installed the app, added the API key, and selected the correct region), when going over the internal index looking for opsgenie I keep seeing these errors:

  WARN sendmodalert [24146 AlertNotifierWorker-0] - action=opsgenie - Alert action script returned error code=3
  INFO sendmodalert [24146 AlertNotifierWorker-0] - action=opsgenie - Alert action script completed in duration=553 ms with exit code=3
  ERROR sendmodalert [24146 AlertNotifierWorker-0] - action=opsgenie STDERR - Unexpected error: No credentials found. Could not get Opsgenie API Key.

The app holds the API key hashed in the local dir under the app's main dir, so I know it's saving the API key. Has anyone else run into this? I can't seem to get it to work no matter what I do. I have added the suggested capability (list_key_storage) to the user roles and it doesn't make a difference. Any help would be appreciated! Thanks!
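One way to confirm whether the key actually landed in secure credential storage (rather than only in a local .conf file) is to query the passwords endpoint as an admin; this is a sketch, so adjust the filter to the Opsgenie app's actual directory name:

  | rest /servicesNS/nobody/-/storage/passwords splunk_server=local
  | table title eai:acl.app username realm

If nothing shows up for the Opsgenie app, re-entering the API key through the app's setup page (rather than editing .conf files by hand) usually repopulates storage/passwords.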
For the table below, whenever the comparison_result column value equals "not equal", I want to copy that entire row and insert the copy before it, changing only the curr_row value to "Turn on".

  _time                           ID   curr_row   comparison_result
  2015-02-16T03:24:57.182+05:30   19   Turn on    equal
  2015-02-16T03:24:58.869+05:30   19   1245       equal
  2015-02-16T03:25:09.179+05:30   19   1245       equal
  2015-02-16T03:25:12.394+05:30   19   1245       equal
  2015-02-16T03:25:24.571+05:30   19   1245       equal
  2015-02-16T05:30:41.956+05:30   19   1245       equal
  2015-02-16T06:02:36.635+05:30   19   1245       equal
  2015-02-16T06:23:23.446+05:30   20   Turn on    not equal
  2015-02-16T06:23:24.608+05:30   20   7656       equal
  2015-02-16T06:40:46.619+05:30   20   7690       not equal
  2015-02-16T06:46:59.594+05:30   20   8783       equal
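A sketch that clones the flagged rows with mvexpand; the 1 ms _time offset is an assumption just to make the inserted copy sort immediately before its original:

  ... | eval copies=if(comparison_result="not equal", mvappend("insert", "orig"), "orig")
  | mvexpand copies
  | eval curr_row=if(copies="insert", "Turn on", curr_row), _time=if(copies="insert", _time - 0.001, _time)
  | fields - copies
  | sort 0 _time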
Hi, I've recently started using Splunk logs. I have a query that fetches the client IDs that call my APIs. These client IDs are UUIDs, and I would rather see a customized, easy-to-read name for each ID. For example, I could save the mapping of client ID to client name in a CSV or somewhere similar, and have my Splunk query show the client name. Is this possible? Could someone explain how to do it?
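Yes, this is exactly what lookups are for. A sketch, assuming a file named client_names.csv with columns client_id and client_name, uploaded under Settings > Lookups > Lookup table files and shared with your app (the event field name clientId is also an assumption):

  ... your search ...
  | lookup client_names.csv client_id AS clientId OUTPUT client_name

The AS clause maps the CSV column onto your event field. If every search should get the name automatically, the same lookup can be wired up as an automatic lookup in props.conf.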
Hi, I have recently signed up for a free trial of Splunk Cloud. When I accessed my instance, it asked for a username and password. After looking at similar situations, I realised other users received an email with details to access their instance. I have only 13 days left on this trial and I have not received a single email with these details. I have checked both my spam and my trash, and I can't see this email. Why have I not received this email yet?
Our server is forwarding events to us and includes some extra fields at the beginning of each event. One of those fields is the timezone offset of the server, so an event might look like:

  domain,hostname,timezone,path,log_message

The log_message contains a timestamp, but the timestamp can be in different locations in the log_message and can have different formats, and it does not include timezone information. Splunk does a good job of finding the timestamps and setting _time to match, but I can't figure out how to apply the timezone field. What I really want is for props.conf to let TZ reference the timezone field from the events (TZ=timezone), but that doesn't seem to work.
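TZ only takes a fixed timezone string, so it cannot reference a field. One sketch, with heavy assumptions (the third comma-separated field holds a signed offset in whole hours, and Splunk is parsing the timestamps as if they were UTC), is to shift _time at index time with INGEST_EVAL on the indexer or heavy forwarder:

  # transforms.conf
  [shift_time_by_event_tz]
  INGEST_EVAL = _time=_time - 3600 * tonumber(mvindex(split(_raw, ","), 2))

  # props.conf
  [your_sourcetype]
  TRANSFORMS-tzshift = shift_time_by_event_tz

The field position, offset format, and sign convention here are all assumptions; an offset written as +0530 or a DST-aware zone name would need different arithmetic.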
Hi folks, I am trying to blacklist gz files in inputs.conf, but somehow the blacklist doesn't work properly. Files to blacklist:

  /var/log/abc.log-20200512.gz
  /var/log/abc.log-20200510.gz
  /var/log/messages-20200319.gz

I tried this:

  [monitor:///var/log/*]
  crcSalt = <SOURCE>
  blacklist1 = \.gz$

But this did not work for some of the files mentioned above. Please help with the correct way to blacklist the .gz files.
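A sketch of the usual form, on the assumption that there is only one blacklist rule (the numbered blacklist1 style is only needed when stacking several rules) and with each attribute on its own line under the stanza header:

  [monitor:///var/log/*]
  crcSalt = <SOURCE>
  blacklist = \.gz$

One caveat: a blacklist only stops files from being picked up going forward; anything the forwarder had already indexed before the rule was added stays in the index.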
I want to ignore events containing certain keywords at the forwarder level, NOT at the indexer. Below are sample logs: ignore an event if it contains an INFO or WARN message, and index only ERROR. What would the file name and config in the UF be to tell the UF to ignore those events? Please let me know the settings at the UF level.

Ignore (contains INFO/WARN):

  2023-05-10 14:32:44,843 org:usbank-prod env:usb-prod PKI:YYY-system-Onull-v01 rev:2999 messageid:37166-20 policy:OAuthV2.VerifyKey giopudded-Main-0 INFO STEPDEFINITIONS
  2023-05-10 14:32:44,843 org:usbank-prod env:usb-prod PKI:YYY-system-Onull-v01 rev:2999 messageid:37166-20 policy:OAuthV2.VerifyKey giopudded-Main-0 WARN STEPDEFINITIONS

Index only (ERROR):

  2023-05-10 14:32:44,843 org:usbank-prod env:usb-prod PKI:YYY-system-Onull-v01 rev:2999 messageid:37166-20 policy:OAuthV2.VerifyKey giopudded-Main-0 ERROR STEPDEFINITIONS
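One caveat up front: a universal forwarder does not parse unstructured events, so REGEX-based filtering in props/transforms takes effect on the first full parsing tier (a heavy forwarder or the indexer), not on the UF itself. A sketch of the standard nullQueue filter, with the sourcetype name as an assumption:

  # props.conf
  [your_sourcetype]
  TRANSFORMS-drop_info_warn = drop_info_warn

  # transforms.conf
  [drop_info_warn]
  REGEX = \s(INFO|WARN)\s
  DEST_KEY = queue
  FORMAT = nullQueue

If the filtering truly has to happen on the forwarder box, the usual route is to swap that UF for a heavy forwarder and deploy the same two files there.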
Hi, I am onboarding the /var/log/secure path and I am getting the below message about an offset:

  INFO WatchedFile /path/to/file.log Will begin reading at offset=253 for file

I just wondered what I could do to resolve this? Thanks, Joe
Hi Team, I am collecting metrics using API calls every 5 minutes, but all the metrics are coming in as a single event, as in the attached screenshot. I need to break these into individual events (each event starting with the text "confluent_kafka_"). I have edited my props.conf as below, but it is still coming through as a single event. Can someone please guide me on how to do it?

  [source::kafka_metrics://kafka_metrics]
  LINE_BREAKER = (confluent_kafka_)(\s)
  SHOULD_LINEMERGE = false
I have a user who had created a private data model. The user has left the organization, but had created a dashboard that used this data model without setting appropriate permissions, so other users get an error when the dashboard tries to access that specific data model. That all makes sense. However, as a user with the admin role, I was not able to see this data model at all (via All Configurations or otherwise). I confirmed that it existed only via the file system (grep on the user's .conf files) and was then able to create a temporary local account for the departed user in order to access their knowledge objects and set permissions as needed. Should admin not be able to see and act on private data models? The admin role does have the admin_all_objects capability, as per the default. Splunk Enterprise 8.2.x. Thanks in advance.
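For comparison, one way to enumerate data models across all users and apps without touching the file system is the REST endpoint; this is a sketch and assumes the admin role can dispatch | rest against it:

  | rest /servicesNS/-/-/datamodel/model splunk_server=local
  | table title eai:acl.owner eai:acl.app eai:acl.sharing

The wildcarded user/app context (the two dashes) is the important part; the All Configurations view filters by the current user/app context, which may be why the private model never appeared there.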
Hello, we are trying to renew our third-party SSL certificate for our Splunk Web service, but we are unsure of the steps. We are using a Windows server and it looks like there are various folders with certs in them:

  \splunk\etc\auth
  \splunk\etc\auth\customwebcert
  \splunk\etc\auth\splunkweb

Each folder has a .pem and a .key. I tried replacing each file with the new key and pem that we generated using OpenSSL, and I also tried updating the web config file, and it still didn't work. Does anyone have ideas on what we need to do to replace the cert, or where the setting is that points to the location of the certificate?
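The settings that point Splunk Web at its certificate live in web.conf (typically $SPLUNK_HOME\etc\system\local\web.conf), and relative paths resolve against $SPLUNK_HOME. A sketch, assuming an unencrypted key and a .pem containing the server cert followed by any intermediates (the customwebcert file names are placeholders for wherever your new files sit):

  [settings]
  enableSplunkWebSSL = true
  serverCert = etc/auth/customwebcert/mycert.pem
  privKeyPath = etc/auth/customwebcert/mykey.key

Splunk Web needs a restart afterwards for the new certificate to take effect.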
Hi, in the log files we capture Java errors as multiple entries, so in order to see the entire error set I need to see the events/records (10 is used here as an example) immediately prior to and after the keyword being searched. Currently, when I use the SPL below, I get only the events that contain the word "java", which is good, but I also want the 10 records (i.e. log entry lines) before and after each "java" record. The records before and after may not contain the keyword "java", but I still want to see them in the result set.

  | from datamodel:"xyz"
  | fields host source _time
  | where like(_raw,"%java%")
  | table host source _raw

Is there a way to display the 10 records/events before and after the keyword being searched in the _raw field? Thanks
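One sketch: run over all events (not just the matches), flag the matches, then use streamstats in both directions to mark every event within 10 rows of a match. The by host source clause is an assumption to keep neighbours within the same log; because events stream newest-first by default, the first pass flags the 10 older neighbours and the reversed pass flags the 10 newer ones.

  | from datamodel:"xyz"
  | eval is_java=if(like(_raw, "%java%"), 1, 0)
  | streamstats window=11 current=true max(is_java) as near1 by host source
  | reverse
  | streamstats window=11 current=true max(is_java) as near2 by host source
  | where near1=1 OR near2=1
  | table host source _raw

Note this pulls every event in range through the pipeline, so narrowing the time range first helps.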
I have a field as follows in the logs:

  user="userAbc1 (host1234)"

As you can see, both the username and hostname are packed into the user field. How do I apply a regex to separate them into two corresponding fields, as follows?

  user=userAbc1
  host=host1234
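A sketch with rex at search time; the target names username/hostname are assumptions chosen to avoid colliding with the existing user field and the default host field:

  ... | rex field=user "^(?<username>\S+)\s+\((?<hostname>[^\)]+)\)"

For a permanent extraction, the same regex works in props.conf as EXTRACT-userhost = ^(?<username>\S+)\s+\((?<hostname>[^\)]+)\) in user, where "in user" points the extraction at the user field instead of _raw.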
Hi Team, I am collecting metrics using API calls every 5 minutes, but all the metrics are coming in as a single event, as below, every 5 minutes (screenshot attached):

  confluent_kafka_server_request_bytes{kafka_id="tythtyt",principal_id="sa-r29997",type="Fetch",} 2092668.0 1683872880000
  confluent_kafka_server_memory{kafka_id="yyyy",topic="host002.json.cs.tt.gg",} 0.0 1683872880000

I need to break these into individual events (each event starting with the text "confluent_kafka_"). I have edited my props.conf as below, but it is still coming through as a single event. Can someone please guide me on how to do it?

  [source::kafka_metrics://kafka_metrics]
  LINE_BREAKER = (confluent_kafka_)(\s)
  SHOULD_LINEMERGE = false
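Two things commonly bite here. First, the pattern (confluent_kafka_)(\s) only matches where the prefix is immediately followed by whitespace, which never happens in these metric names, so no break occurs; and whatever LINE_BREAKER's first capture group matches is discarded, so even a matching prefix would be eaten. Second, line-breaking props must sit on the first full parsing tier (heavy forwarder or indexer), not on a universal forwarder. A sketch using a lookahead so the break lands just before each metric name and only the whitespace is discarded, assuming metrics are whitespace-separated:

  [source::kafka_metrics://kafka_metrics]
  SHOULD_LINEMERGE = false
  LINE_BREAKER = (\s+)(?=confluent_kafka_)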
Hi Team, recently I configured Splunk in my project to monitor the application logs. I have found that there is sometimes a mismatch between the log count in the file on the server and the event count in Splunk. It does not happen all the time, only 2 or 3 times in a month; on the remaining days the event count matches the log file count on the server. Could you please share suggestions to troubleshoot the issue?

Splunk Enterprise licensed version: 9.0.3, server kernel: Linux Red Hat
Universal forwarder version: 9.0.3, server kernel: Linux Red Hat

Example: the log file size is 500MB, the total log count in the file is 1520713, and the total event count in Splunk after indexing is 1520794, which is higher than the server log file:

  log count in application log file = 1520713
  event count in Splunk search = 1520794 (higher than the actual log file)

I have verified the splunkd logs and there are no errors. I verified limits.conf and props.conf as well and there is no specific config related to this.

inputs.conf:

  [monitor:///app/log/audit.log]
  index = xxxx
  disabled = false
  ignoreOlderThan = 7d
  recursive = false

limits.conf:

  [thruput]
  maxKBps = 512
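When Splunk shows more events than the file has lines, duplicate ingestion (for example a rotated file being re-read) is a common suspect. A sketch of a search to spot duplicates, with the index and source taken from the config above:

  index=xxxx source="/app/log/audit.log"
  | stats count by _time, _raw
  | where count > 1

If duplicates cluster around rotation times, reviewing how the file is rotated and the crcSalt/initCrcLength settings on the monitor stanza would be a reasonable next step.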
We have the following alert to check whether CPU is >= 85 and alert us. For some reason it's not working; it worked until 14th April 2023 but not after that:

  index=index host=12345 sourcetype="PerfmonMk:CPU"
  | stats avg(cpu_load_percent) as CPUUSAGE by host
  | where CPUUSAGE >= 85

Our data is listed below:

  4/30/23 11:59:56.000 PM
    0 15.797067520866204 7.498591389607462 8.27969465935824 1842.8858123299901 0 0 10.299361837763916 0 82.45220035416348 3.466196874917047 78.98600347924642 0 89.49445480387092 1437.5109298999423 0
    %_Processor_Time = 15.797067520866204, cpu_load_percent = 15.797067520866204, host = 12345, source = PerfmonMk:CPU, sourcetype = PerfmonMk:CPU

  4/30/23 11:59:56.000 PM
    1 10.32934463261076 5.311502234305285 4.999060926404974 1399.9132595018916 0 0 52.3967534270708 0 88.2533286122202 3.1865844001204375 85.06674421209975 0 102.49364935638849 847.3474972156449 0
    %_Processor_Time = 10.32934463261076, cpu_load_percent = 10.32934463261076, host = 1234, source = PerfmonMk:CPU, sourcetype = PerfmonMk:CPU

  4/30/23 11:59:56.000 PM
    2 7.673593515458121 2.6557511171526427 4.999060926404974 1328.2177018545447 0 0 6.599591080508917 0 90.14091802854833 2.2141230769799893 87.92679495156834 0 45.59717473806161 910.1436062847298 0
    %_Processor_Time = 7.673593515458121, cpu_load_percent = 7.673593515458121, host = 1234, source = PerfmonMk:CPU, sourcetype = PerfmonMk:CPU
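One thing that stands out in the sample: the alert filters on host=12345, but two of the three events show host = 1234. If the host name changed, the filter would silently match nothing and the alert would stop firing without any error. A sketch to check what host values are actually arriving:

  index=index sourcetype="PerfmonMk:CPU" earliest=-24h
  | stats count latest(_time) as last_seen by host
  | convert ctime(last_seen)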
Hi, could anyone provide me with a search to get the list of sourcetypes associated with every index in my Splunk Cloud stack?
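A sketch using tstats, which reads index metadata and stays fast over long time ranges; add index=_* to the where clause if internal indexes matter:

  | tstats count where index=* by index, sourcetype
  | stats values(sourcetype) as sourcetypes by index

tstats only covers the time range of the search, so widen earliest/latest to cover the full retention period you care about.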