All Topics

hi, I use a "link to search" drilldown from a table panel. When I look at my XML, I see a lot of special characters:

<drilldown>
  <link target="_blank">search?q=%60index_mesuresc%60%20sourcetype%3D%22ez%3Acitrix%22&amp;earliest=&amp;latest=</link>
</drilldown>

As far as I know, we can use CDATA to correct this? But I don't know how to use the CDATA tag to stop these characters from being displayed. I have tried this, but it doesn't work:

<drilldown>
  <link target="_blank"><![CDATA[search?q=%60index_mesuresc%60%20sourcetype%3D%22ez%3Acitrix%22&amp;earliest=&amp;latest=]]></link>
</drilldown>

Could you help, please?
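For what it's worth, the %60/%20/%3D sequences are ordinary URL percent-encoding (backtick, space, equals) and &amp; is the standard XML escape for &, so the stored XML is not actually corrupted. A hedged sketch of the CDATA form, with the same search written unescaped inside it (whether the dashboard editor preserves CDATA when you save through the UI is not something this sketch can guarantee):

<drilldown>
  <link target="_blank"><![CDATA[search?q=`index_mesuresc` sourcetype="ez:citrix"&earliest=&latest=]]></link>
</drilldown>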
As I have written a few times already, I have in my care a relatively strange environment - a quite big installation with RF=1. Yes, I know I don't have data resilience or high availability - the customers knew it and accepted it at the start of the project. But since we're approaching the upgrade, and as I'm reading the upgrade instructions, some questions pop up. The normal procedure includes a rolling upgrade of the cluster member nodes. The rolling upgrade starts with splunk upgrade-init cluster-peers and ends with splunk upgrade-finalize cluster-peers (or the corresponding calls to REST endpoints). The question is: what do those two commands really do, and how does that interact with the RF=1 situation? As I asked before - it's pointless to put my cluster in maintenance mode, and there is no bucket rebalancing after offline/online because there is nothing to rebalance. So do I have to bother with all this, or can I simply take the indexers down one by one, upgrade them, and start them up again? Yes, I know I won't have full search capacity during an indexer's downtime - it's obvious that if the data is not there I can't search it, and my searches would be incomplete. The customers know it and we'll schedule a "partial downtime". What do you think?
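For reference, a sketch of the documented rolling-upgrade sequence the question is about (the two cluster-peers commands come from the post itself; this shows only the ordering, not what upgrade-init does internally):

# on the cluster manager
splunk upgrade-init cluster-peers
# then, one peer at a time:
splunk offline                        # on the peer
#   ...upgrade the Splunk binaries on that peer...
splunk start                          # on the peer
# after all peers are upgraded, on the cluster manager:
splunk upgrade-finalize cluster-peers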
Hi all, Does anyone know if it's possible to use CyberArk to rotate the Splunk SOAR admin account password? If it is, do you have any pointers for getting this solution implemented? As always, any help is most gratefully received. Mark.
We would like to monitor Spring Boot's HikariCP connection pool using AppDynamics. We saw a possibility of doing so using JMX MBeans, but can't get it to work. The MBean in JConsole looks as follows: (screenshot) And the JMX Metric Rule looks as follows: (screenshot) Did I make a mistake in the configuration? Is there another way to monitor the connection pool?
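One frequent cause, offered as an assumption since the screenshots did not survive: HikariCP only registers its MBeans when explicitly asked to, so a JMX rule that is otherwise correct still matches nothing. A minimal Spring Boot sketch (the pool name is hypothetical):

# application.properties
spring.datasource.hikari.register-mbeans=true
# optional: a fixed pool name makes the MBean ObjectName predictable
spring.datasource.hikari.pool-name=MyHikariPool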
Hi, I have this search:

| spath
| rename object.* as *
| spath path=events{} output=events
| stats by timestamp, events, application, event_type, account_id, context.display_name
| mvexpand events
| eval _raw=events
| kv
| table timestamp, payload.rule_description, "context.display_name", account_id, "event_type", "application", "payload.rule_url"
| rename account_id as "Account ID", timestamp as "Timestamp", context.display_name as "System", context.host_url as "Host URL", event_type as "Event Type", "title" as "Title", "application" as "Application", "payload.rule_url" as "URL"

I have JSON with multiple events; inside each event I have "payload.rule_description", but some records don't have the "payload.rule_description" object, so the field is missing. How can I check whether the record has "payload.rule_description" and, if not, bring in event_type instead? I tried eval title=if(payload.rule_description, payload.rule_description, event_type), but it doesn't work. Thanks
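A sketch of the usual fix, assuming the field really is named payload.rule_description after kv: inside eval, field names containing dots must be wrapped in single quotes, and coalesce() returns the first non-null argument, which sidesteps the boolean-expression problem with if():

| eval title=coalesce('payload.rule_description', event_type)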
Upgrading the Microsoft Azure Add-on for Splunk via the deployment server wipes out all the inputs. Is there a way to preserve them? I always take a backup of the inputs and re-add them, but then you lose the look-back period. Thanks in advance
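A common mitigation, sketched under the assumption that the inputs currently live inside the add-on's own directory: keep the input stanzas in a small companion app, so redeploying the add-on never touches them. This protects the configuration; whatever checkpoint state drives the look-back period is a separate concern. The app names below are hypothetical:

$SPLUNK_HOME/etc/deployment-apps/
    TA-MS-Azure/            # the add-on itself, replaced on upgrade
    azure_inputs/           # companion app holding only your input stanzas
        local/inputs.conf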
Hi, we have a directory with daily log files I want to read into Splunk 8.1.5: /dir1/dir2/dir3/dir4/file-20220309.log, file-20220308.log, ...

Version A, working: [monitor:///dir1/dir2/dir3/dir4]
Version B, working: [monitor:///dir1/*/d*/dir4/*]
Version C, failing: [monitor:///dir1/*/d*/dir4]

Version C should in theory match the example of [monitor:///apache/*/logs] in the documentation, shouldn't it? That is, as long as "logs" is a directory. Am I missing something here? Am I seeing a bug? Is there a limit on the number of wildcards in a path? Puzzled in Hamburg, Volkmar
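As a point of comparison, a sketch of a stanza that sidesteps mid-path wildcards entirely, assuming the daily files always follow the file-YYYYMMDD.log pattern shown above:

[monitor:///dir1/dir2/dir3/dir4]
whitelist = file-\d{8}\.log$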
Hello, How can I report certain "Query Details", just like we do in the dashboard, on an hourly/daily basis? I tried to work around it by creating a dashboard with an iFrame using the target URL, but it contains a lot of unneeded content, and I can't view anything that needs scrolling. Replicating the "Query Details" in the dashboard is another idea, but creating a dashboard for each query the customer asks for would be painful as well - and the report is sent as a blank page! Is there any other way for this to be done? Attached are a sample of the "Query Details", the workaround I mentioned, and the received report. Regards, Khalid.

The "Query Details": (screenshot)
"Query Details" as URL in iFrame dashboard: (screenshot)
Received report: (screenshot)
Hi, I have 2 timecharts where I need to show a TOTAL count across specified field values. The first timechart must show the total count over all field values, and the 2nd timechart must show the total count over 2 of the field values. I am unable to incorporate a stats or eval function before the timechart command. Here is what my timecharts currently look like: (screenshot) And here is the respective XML code: (screenshot) Can you please help? Many thanks, Patrick
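Assuming the charts are driven by a "timechart count by <field>" search, one sketch uses addtotals, which appends a per-row total series without needing stats or eval before the timechart; the field name my_field and the values A and B are hypothetical:

| timechart count by my_field
| addtotals fieldname=Total          # total across all field values

| timechart count by my_field
| addtotals A B fieldname=Total_A_B  # total across just the two values A and B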
I am looking for "failed login for ADMIN detected", but because the value in the Time column is two years behind the time in the event, it doesn't alert. My log sample is: (screenshot) I also have _time 2020-02-23T23:02:20.000+01:00. My search so far is:

index=abc sourcetype=def "Failed login for ADMIN detected"
| rex field=_raw "(?ms)(?=[^c]*(?:cs2=|c.*cs2=))^(?:[^=\\n]*=){5}(?P<DatabaseEventDate>[^ ]+)"
| stats count by duser cs1 cs2 DatabaseEventDate

This gives me a new field with the correct time: DatabaseEventDate 23.02.2022,13:11:39

How can I correct the timestamp without changing the props file (the existing configuration works for another use case)? Please help! Thanks in advance
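One search-time sketch, assuming DatabaseEventDate always looks like 23.02.2022,13:11:39: search a window wide enough to cover the skew, parse the real event time with strptime(), and filter on that instead of _time (the -30d and -1h windows are placeholders):

index=abc sourcetype=def "Failed login for ADMIN detected" earliest=-30d
| rex field=_raw "(?ms)(?=[^c]*(?:cs2=|c.*cs2=))^(?:[^=\\n]*=){5}(?P<DatabaseEventDate>[^ ]+)"
| eval eventEpoch=strptime(DatabaseEventDate, "%d.%m.%Y,%H:%M:%S")
| where eventEpoch >= relative_time(now(), "-1h")
| stats count by duser cs1 cs2 DatabaseEventDate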
Hi All, I want to ask if you know how to detect when someone changes their mobile number in AD. BR,
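Assuming directory-service change auditing is enabled and the Windows security log is indexed, a sketch built on event 5136 ("a directory service object was modified"); the index and the exact field names depend on your inputs and add-on, so treat them as assumptions to verify:

index=wineventlog EventCode=5136 AttributeLDAPDisplayName=mobile
| table _time SubjectUserName ObjectDN OperationType AttributeValue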
There are two environments, INT and PROD. The value of IREFFECTIVEDATE in INT is always the same, as it is in PROD, but the two environments have different values. I want to know when the value of IREFFECTIVEDATE in either environment changes. Here is a log sample:

2022-03-04 14:13:00.006, IREFFECTIVEDATE="2016-07-01 00:00:00.0", IRLOANRATE="5"

So far my search is this:

index=xy sourcetype=xy
| eval env = if(host=="prod1", "PROD", "INT")
| table IREFFECTIVEDATE IRLOANRATE env
| head 1
| eval single_value="IREFFECTIVEDATE : ".IREFFECTIVEDATE." | IRLOANRATE : ".IRLOANRATE." | Environment : ".env
| fields single_value
| sort 0 _time
| streamstats current=f last(IREFFECTIVEDATE) as priorDate last(_time) as priorTime by env
| where NOT (IREFFECTIVEDATE=priorDate)
| mvcombine single_value delim=" "
| nomv single_value

streamstats recognizes the changing value, but it needs to be split by env. Any ideas please?
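A sketch that keeps _time and env alive through the pipeline (the table/head 1/fields steps in the original drop the fields streamstats needs); the host name prod1 is taken from the post:

index=xy sourcetype=xy
| eval env=if(host=="prod1", "PROD", "INT")
| sort 0 _time
| streamstats current=f last(IREFFECTIVEDATE) as priorDate by env
| where IREFFECTIVEDATE!=priorDate
| eval single_value="IREFFECTIVEDATE : ".IREFFECTIVEDATE." | IRLOANRATE : ".IRLOANRATE." | Environment : ".env
| table _time env single_value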
Hi, I found the following telegraf service-monitoring config; is there any way to specify a service name (e.g. the Print Spooler service)?

receivers:
  smartagent/telegraf/win_services:
    type: telegraf/win_services

service:
  pipelines:
    metrics:
      receivers: [smartagent/telegraf/win_services]
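The underlying Smart Agent monitor takes a list of service names; assuming the option carries over to the receiver unchanged (worth verifying against the monitor docs), a sketch using Spooler, the actual service name behind "Print Spooler":

receivers:
  smartagent/telegraf/win_services:
    type: telegraf/win_services
    serviceNames:
      - Spooler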
Hi, I'm using the .NET SDK and I cannot find how to pass a cancellation token as an argument to cancel the search. Is there any way to do it? Thank you
We want to compare 2 inputlookup files. Let's say we have a host field in each: lookup1 has host values abc, bcd, def, xyz, and lookup2 has host values bcd, xyz. Required result: abc, def. Simply put, we want to show the count of hosts that are present in lookup1 but missing from lookup2. We have already tried:

| inputlookup lookup2
| join type=left host [ | inputlookup lookup1 | eval check="match" ]
| search NOT check=*
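A sketch that matches the example result (abc and def are in lookup1 but absent from lookup2) and avoids the join entirely:

| inputlookup lookup1
| search NOT [ | inputlookup lookup2 | fields host ]
| stats count AS missing_hosts values(host) AS host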
Hi, We are going to evaluate Splunk, but we are not sure that it will cover everything we need, so let's see if you can help us. On the one hand, we need to upload log files and process them (so far we do that with an old version of Elasticsearch). They are mainly IIS logs. We also want to collect metrics from the applications we develop - applications that are on premises. But we are also migrating part of the development to Azure. We have everything there from databases to Kubernetes and microservices. The Splunk page makes many references to AWS, but not to Azure. Can Splunk monitor Azure seamlessly - Kubernetes, the logs it generates, the microservices, etc.? I haven't mentioned it yet, but we wouldn't install Splunk on our own server. It would either be in Splunk's cloud or directly in our Azure. Thanks, Miguel
Hello All, One of our indexes (name: okta) has a searchable retention period of 90 days, as shown in the screenshot. Is there a way to pull data earlier than the 90-day mark? We want to go back up to the last 1 year. If I change this value to 365 days, will it let me search through the old data (older than 90d)? Or is there something more that needs to be done? Thanks in advance
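Searchable retention is controlled by frozenTimePeriodInSecs in indexes.conf; raising it keeps future data longer, but buckets that have already rolled to frozen are gone unless they were archived (e.g. via coldToFrozenDir) and thawed back. A sketch:

# indexes.conf
[okta]
frozenTimePeriodInSecs = 31536000    # 365 days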
I have set up Microsoft Defender for Endpoint inputs with several add-ons, but it looks as though most of the add-ons are not CIM-ready for the Endpoint and Malware data models. I have used:

Microsoft 365 Defender Add-on for Splunk - https://splunkbase.splunk.com/app/4959/
Splunk Add-on for Microsoft Security - https://splunkbase.splunk.com/app/6207/#/overview

Which one is CIM-ready?
Nagios — Splunk Observability Cloud documentation. Please assist, as I am not able to start the OTel service due to the error "found unknown escape character". Below is the script - how do I escape the characters in the argument?

"LC_ALL=\"en_US.utf8\" C:\Program Files\NSClient++\check_nrpe -H pool.ntp.typhon.net"
Can anyone explain the tsidxWritingLevel values from 1 to 4? tsidxWritingLevel = [1|2|3|4] Reference - https://docs.splunk.com/Documentation/Splunk/8.1.1/Admin/Indexesconf
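As a rough summary from the indexes.conf spec: higher levels write newer, more space-efficient tsidx file formats, and the trade-off is that older Splunk versions cannot read files written at a higher level, so the level is typically raised only after every node in the deployment is upgraded. A sketch of where the setting lives (applying it in [default] is just one option):

# indexes.conf
[default]
tsidxWritingLevel = 3    # pick the highest level every node in the deployment supports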