All Topics


I found this search query online. Is there a way to modify it to report on a host in Splunk instead of on the Splunk server itself?

  | rest /services/server/info
  | eval LastStartupTime=strftime(startup_time, "%Y/%m/%d %H:%M:%S")
  | eval timenow=now()
  | eval daysup=round((timenow - startup_time) / 86400, 0)
  | eval Uptime=tostring(daysup) + " Days"
  | table splunk_server LastStartupTime Uptime
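A possible starting point, assuming the goal is to see when each host last sent data rather than true OS uptime (the index name is a placeholder): the rest endpoint above only knows about Splunk instances, so per-host information has to come from the events the hosts send.

  | metadata type=hosts index=main
  | eval LastEventTime=strftime(lastTime, "%Y/%m/%d %H:%M:%S")
  | table host LastEventTime

For real OS uptime you would need to ingest it explicitly, for example via the *nix or Windows add-ons.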
Hi. In a classic dashboard it's possible to hide errors, e.g.: https://community.splunk.com/t5/Dashboards-Visualizations/How-to-hide-error-message-icons-from-dashboard/m-p/272528 Dashboard Studio dashboards use a JSON file, and I need a technique similar to the link above in the JSON file used by Dashboard Studio. I checked the documentation and couldn't find anything about hiding errors in Dashboard Studio. Can anyone help me suppress errors in Dashboard Studio? Thanks
I have my data as follows: | table envName, envAcronym, envCluster. I am using envName as the label of the dropdown but want to use envAcronym and envCluster as the value. From the dropdown editor, I don't see a way to specify multiple values. Is there a way to achieve what I am looking for? I tried setting the value to a JSON object, but then dereferencing the variables with dot notation does not work; for example, I tried to reference the value as "$selectedEnv.envAcronym$" and "$selectedEnv.envCluster$", but it does not work.
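One common workaround, sketched under the assumption that a delimiter such as "|" never occurs in the data: pack both fields into the dropdown's value (label stays envName) and split them apart in the consuming search.

Dropdown population search:

  | table envName envAcronym envCluster
  | eval envValue = envAcronym . "|" . envCluster

Panel search, unpacking the token:

  | eval selected = split("$selectedEnv$", "|")
  | eval envAcronym = mvindex(selected, 0), envCluster = mvindex(selected, 1)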
I have the search below and need to get the statuses for yesterday and today with respect to the API value. My current search is:

  index="l7" earliest=-1d@d latest=now
  | eval status=case(response_status<400 AND severity="Audit", "Success_count", response_status>=400 AND response_status<500, "Backend_4XX", response_status>=500, "Backend_5XX", response_status==0 AND severity="Exception", "L7_Error")
  | eval Day=if(_time<relative_time(now(),"@d"), "Yesterday", "Today")

I need my data grouped separately or side by side. I need your help in achieving this.
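A sketch of one way to lay the counts out side by side, assuming the API name lives in a field called API (adjust to the real field name):

  ... | stats count by API status Day
  | eval series = status . " (" . Day . ")"
  | xyseries API series count

This yields one row per API with one column per status/day combination.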
How do I send events from two different indexes to a sourcetype different from the one I already have? All the events of the two indexes have to go into another sourcetype. How do I configure props.conf and transforms.conf?
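One caveat up front: props/transforms rewrites happen at index time on incoming data, so events that are already indexed keep their sourcetype. For new data, a sketch along these lines (stanza and sourcetype names are placeholders) goes on the indexers or heavy forwarders; note the match key is the incoming sourcetype, not the index, since the index is assigned separately in inputs.conf.

props.conf:

  [old_sourcetype]
  TRANSFORMS-set_st = force_new_sourcetype

transforms.conf:

  [force_new_sourcetype]
  REGEX = .
  DEST_KEY = MetaData:Sourcetype
  FORMAT = sourcetype::new_sourcetype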
I want to find the time difference between two events (the duration some operation took) and plot a graph showing how much time it took for each entity. I used the query below:

  <base_search>
  | eval duration = duration_seconds + (60 * (duration_minutes + (60 * duration_hours)))
  | fieldformat duration = tostring(duration, "duration")
  | fieldformat duration_in_minutes = duration / 60

I got correct output in the form of a table, but with some extra fields. I only need the first column (cls_id) and the last column (duration_in_minutes). Can someone help me get that? I tried appending | table cls_id, duration_in_minutes, but that gives null values for the duration_in_minutes field/column.
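The likely cause: fieldformat only changes how an already existing field is displayed, so duration_in_minutes is never actually created and table finds nothing to show. A sketch using eval instead (rounding to two decimals is an assumption):

  <base_search>
  | eval duration = duration_seconds + (60 * (duration_minutes + (60 * duration_hours)))
  | eval duration_in_minutes = round(duration / 60, 2)
  | table cls_id duration_in_minutes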
Hi. I'm using Splunk Enterprise 9.0.4 on-prem. The search head has been set up with Azure AD as IdP, and normal user login works as expected. I tried to connect the Splunk Mobile app to my search head, but it complains that "SAML needs to be set up for Connected Experiences before devices can be registered", so I log on as administrator and navigate to "SAML Configuration" in Splunk Secure Gateway. There it states that I need to connect to a SAML IdP, and when I look at Okta or Azure it says: "To use Okta or Azure, use a provided authentication script to establish a persistent connection." So it seems there should be a provided script that I can use in my SAML configuration; I just can't find anywhere that states which script it is. Hopefully someone is less blind than me and can point me in the right direction. Kind regards /las
Hello Splunkers! I am using the "transaction" command to merge multiple logs based on a field they share. To clarify, I have email logs, and for one email I receive 4 logs in the following order: from, subject, attachment, to. They all have one field in common: id. I am using the following transaction command:

  | transaction id startswith=from endswith=to

The issue is that it merges only the two logs containing "from" and "to". Can you please check whether I am using the command correctly? I need it to also merge the logs in between, not only "from" and "to".
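Two things worth checking, sketched on the assumption that from, subject, attachment, and to are extracted fields (note that startswith=from is treated as a search filter for the literal term "from", not a field test): first, whether all four events really carry the same extracted id; second, whether plain stats does the merge-by-id job more cheaply and reliably than transaction:

  | stats values(from) AS from values(subject) AS subject values(attachment) AS attachment values(to) AS to by id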
My verbose-mode and fast-mode results are different. How do I run a scheduled search in verbose mode by default? I added this parameter to the search stanza in savedsearches.conf:

  display.page.search.mode = verbose

But it has no effect; the search still runs in fast mode. Splunk Enterprise version: 9.0.4.1
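For context, display.page.search.mode only controls how the UI runs the search when a user opens it; the scheduler does not run searches in verbose mode. A hedged workaround is to make the search independent of the mode by naming the fields it needs explicitly, since fast mode only extracts fields the search references (index and field names below are placeholders):

  index=my_index sourcetype=my_sourcetype
  | fields host status response_time
  | stats avg(response_time) AS avg_rt by host status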
I've just upgraded a stand-alone (S1 SVA) lab instance from 9.0.x to Splunk v9.1.0.1. Everything is fine and operating at a basic level (see attached image), but since the upgrade there is a UI issue:

  Unable to load app list. Refresh the page to try again.

I have checked the splunkd error log and I'm not seeing anything notable:

  index="_internal" source="/opt/splunk/var/log/splunk/splunkd.log" log_level IN (ERROR, WARN)
  | table event_message
  | dedup event_message

I have also verified consistency of ownership of $SPLUNK_HOME:

  sudo find /opt/splunk -printf '%u:%g\n' | sort -t: -u

I'm not seeing anything obvious, and I have checked the Known Issues list. Has anybody else seen this, before I expend a lot of effort on a new release? I should mention that this instance is now using a Free 500MB license; I have a 10GB NFR license waiting to renew, but I don't see that as an issue and it is not in violation.
We're updating our Linux servers to Debian 12. A few hosts went "missing" in Splunk afterwards. While investigating, I found out that they were in fact not missing, but they had stopped writing log files to /var/log. It seems Debian switched fully to journald, as I was greeted by this README in /var/log:

  You are looking for the traditional text log files in /var/log, and they are gone? Here's an explanation on what's going on: You are running a systemd-based OS where traditional syslog has been replaced with the Journal. The journal stores the same (and more) information as classic syslog. To make use of the journal and access the collected log data simply invoke "journalctl", which will output the logs in the identical text-based format the syslog files in /var/log used to be. For further details, please refer to journalctl(1). [...]

Of course we can simply install the rsyslog package again, but that feels like a step backwards. So here is my question: is there a default, generic approach for collecting all system and service logs from journald that we can use on our UFs, since log files are obviously not the future on Linux? Best regards
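For what it's worth, the Universal Forwarder gained a native journald input in Splunk 9.0, so on a 9.x UF a minimal inputs.conf sketch might look like this (the stanza name and index are assumptions):

  [journald://systemd]
  index = linux_os

On older UFs the usual fallback is reinstalling rsyslog or monitoring the output of a scheduled journalctl export.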
Hi Team, I am getting raw logs like the one below:

  2023-07-22 09:18:19.454 [INFO ] [Thread-3] AssociationProcessor - compareTransformStatsData : statisticData: StatisticData [selectedDataSet=0, rejectedDataSet=0, totalOutputRecords=19996779, totalInputRecords=0, fileSequenceNum=0, fileHeaderBusDt=null, busDt=07/21/2023, fileName=SETTLEMENT_TRANSFORM_MERGE, totalAchCurrOutstBalAmt=0.0, totalAchBalLastStmtAmt=0.0, totalClosingBal=8.933513237882E10, sourceName=null, version=1, associationStats={}] ---- controlFileData: ControlFileData [fileName=SETTLEMENT_TRANSFORM_ASSOCIATION, busDate=07/21/2023, fileSequenceNum=0, totalBalanceLastStmt=0.0, totalCurrentOutstBal=0.0, totalRecordsWritten=19996779, totalRecords=0, totalClosingBal=8.933513237882E10]

I want to show each count separately, i.e. totalOutputRecords=19996779 and totalClosingBal=8.933513237882E10. How can we create a query for this, starting from something like: index="abc" sourcetype="600000304_gg_abs_ipc2" "AssociationProcessor"
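A sketch that pulls both numbers out with rex, assuming the key=value text appears verbatim in _raw (the second pattern allows for scientific notation; each rex takes the first occurrence):

  index="abc" sourcetype="600000304_gg_abs_ipc2" "AssociationProcessor"
  | rex "totalOutputRecords=(?<totalOutputRecords>\d+)"
  | rex "totalClosingBal=(?<totalClosingBal>[\d.Ee+-]+)"
  | table _time totalOutputRecords totalClosingBal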
Hi Team, I am getting these two logs on a daily basis:

  2023-07-17 08:05:59.764 [INFO ] [Thread-3] TransformProcessor - Started ASSOCIATION process for BusDt=07/16/2023, & version=1
  2023-07-17 08:52:44.484 [INFO ] [Thread-3] AssociationProcessor - Successfully completed ASSOCIATION process!! isAssociationBalanced?=true
  2023-07-18 08:04:59.764 [INFO ] [Thread-3] TransformProcessor - Started ASSOCIATION process for BusDt=07/17/2023, & version=1
  2023-07-18 08:52:44.484 [INFO ] [Thread-3] AssociationProcessor - Successfully completed ASSOCIATION process!! isAssociationBalanced?=true

I want to create one query that calculates the average time between process start and completion. My current query is:

  index="600000304_d_gridgain_idx*" sourcetype="600000304_gg_abs_ipc2" source="/amex/app/gfp-settlement-transform/logs/gfp-settlement-transform.log"

Can someone guide me on how to move forward and build the average query?
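One hedged approach: pair each start with its completion via transaction and average the resulting duration (maxspan is an assumption about the longest plausible run):

  index="600000304_d_gridgain_idx*" sourcetype="600000304_gg_abs_ipc2" source="/amex/app/gfp-settlement-transform/logs/gfp-settlement-transform.log" ("Started ASSOCIATION process" OR "completed ASSOCIATION process")
  | transaction startswith="Started ASSOCIATION process" endswith="completed ASSOCIATION process" maxspan=4h
  | stats avg(duration) AS avg_seconds
  | eval avg_duration = tostring(round(avg_seconds, 0), "duration")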
Hi folks, when I enter the Ingest Actions page from our Splunk portal, we get the error shown below:

  "Unable to load sourcetypes: An unexpected error occurred"

I also tried clearing the browser's cookies, which worked for a short time before the same error page returned. Is anyone aware of this problem? If so, please suggest an approach for eliminating it. To reproduce the issue: Splunk Homepage > Settings > Ingest Actions > click on any rule.
Let's say my colddb space is 15TB and the volume maxVolumeDataSizeMB is 20TB, as below (indexes.conf). What issues might this cause, or is it OK?

  df -h | grep sde
  sde 8:64 0 32T 0 disk
  -sde1 8:65 0 15T 0 part /apps/splunk/colddb

On the indexer cluster master server:

  vi /apps/splunk/etc/master-apps/fmrei_all_indexes_frozen/local/indexes.conf
  [volume:secondary]
  path = /apps/splunk/colddb
  maxVolumeDataSizeMB = 20000000
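One risk worth flagging: with maxVolumeDataSizeMB set above the filesystem size, volume-based retention will never kick in before the 15TB partition fills, and indexing can halt on a full disk. A hedged sketch that keeps the limit safely below the real capacity (the exact figure, roughly 90% of the partition, is an assumption):

  [volume:secondary]
  path = /apps/splunk/colddb
  # keep headroom below the 15TB partition so the volume cap triggers first
  maxVolumeDataSizeMB = 13500000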
Hello Splunkers, what is "the best practice" for ingesting DNS logs into a distributed Splunk environment? I hesitate between two possibilities (maybe there are others):

- Install a UF on my DNS servers, simply monitor the path where my DNS logs are located, and forward the logs to my Splunk environment (see the sketch below).
- Or use the Stream app, which seems a little more complicated: https://docs.splunk.com/Documentation/StreamApp/8.1.1/DeployStreamApp/AboutSplunkStream

Let me know what you used / think about that. Thanks a lot! GaetanVP
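For contrast: Stream captures DNS off the wire, while the UF only reads what the DNS server writes to disk, so the first option requires query logging to be enabled on the DNS server. If the UF route fits, a minimal inputs.conf monitor sketch might look like this (path, sourcetype, and index are placeholders for your DNS server's actual logging setup):

  [monitor:///var/log/named/query.log]
  sourcetype = dns:query
  index = dns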
Dear all, we would like to report an issue with Splunk ES when navigating the Search window. We are no longer able to:

- Save any Interesting Fields into Selected Fields once a field is selected for a future search.
- Keep the selected "Mode" in the Search window once we open a new Search window.

Have you also encountered this problem? Any solution? Many thanks for your help.
Hi, I heard through the grapevine that the APM agent can now connect directly to ThousandEyes. Is this true? And is there any instructional documentation that shows how to configure the agents to achieve it?
Hi all, we forward Cloudflare firewall events to Splunk and have enabled "payload logging" to view what payload was sent by the user. Unfortunately, the payload data forwarded to Splunk is encrypted, and we are not sure which tool to use to decrypt it. We do hold the private keys, but how to decrypt the payload within a Splunk search is the question. We tried installing the DECRYPT2 app on Splunk, but that was also of no help. Has anyone come across this type of issue, and how did they fix it? Could someone suggest how to proceed?
Hi, in the AppDynamics documentation there is an option: sim.cluster.logs.capture.enabled. The documentation says "This option is disabled by default", yet lists the default value as "true". That is a little confusing, because logically sim.cluster.logs.capture.enabled = true should mean that log capturing is enabled. So if I want to enable log capture, must I set the value to "false"? https://docs.appdynamics.com/appd/23.x/latest/en/infrastructure-visibility/monitor-kubernetes-with-the-cluster-agent/administer-the-cluster-agent/enable-log-collection-for-failing-pods