All Posts

Stopping splunkd is taking up to 6 minutes to complete. We have a process that snapshots the instance, and we stop splunkd prior to taking that snapshot. Previously, on v9.0.1, we did not experience this; we are now on v9.2.1. While shutting down I am monitoring splunkd.log, and the only errors I am seeing have to do with the HFs: 'TcpInputProc [65700 tcp] - Waiting for all connections to close before shutting down TcpInputProcessor'. Has anyone else experienced something similar post upgrade?
That did the trick, thank you again for the excellent help. Have a good week. Thanks, Tom
Hi All, This is the query I am using to try to get the status, but in the table it shows both error and success. PFA screenshot.

| eval Status=case(priority="ERROR" AND tracePoint="EXCEPTION" OR message="*Error while processing*","ERROR", priority="WARN","WARN",priority!="ERROR" AND tracePoint!="EXCEPTION" OR message!="*(ERROR):*","SUCCESS")
| stats values(Status) as Status by transactionId
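Worth noting for the query above: eval string comparisons do not expand "*" wildcards, and AND/OR inside case() need explicit grouping. A minimal sketch of a restructured version (field names are taken from the query above; match() replaces the wildcard comparisons and true() supplies the default):

| eval Status=case(
    (priority="ERROR" AND tracePoint="EXCEPTION") OR match(message, "Error while processing"), "ERROR",
    priority="WARN", "WARN",
    true(), "SUCCESS")
| stats values(Status) as Status by transactionId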
How can I resolve this error: "Couldn't complete HTTP request: error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure"? I keep getting this error on the Splunk forwarder when I run SPLUNK_HOME/splunk list monitor or SPLUNK_HOME/splunk list inputstatus.
I am trying to compute the R-squared value of a set of measured values, to verify the performance or accuracy of a predictive model. But I can't figure out how to go about this, or whether Splunk has a function or command for it. Thanks
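One way is to compute it directly from the definition R² = 1 − SS_res / SS_tot with eval and stats. A minimal sketch, assuming the measured values are in a field called actual and the model's outputs in a field called predicted (both field names are placeholders):

| eventstats avg(actual) as mean_actual
| eval ss_res_part = pow(actual - predicted, 2)
| eval ss_tot_part = pow(actual - mean_actual, 2)
| stats sum(ss_res_part) as ss_res, sum(ss_tot_part) as ss_tot
| eval r_squared = 1 - (ss_res / ss_tot)

Here eventstats attaches the mean of the measured values to every event, the two evals accumulate the squared residuals and squared deviations per event, and the final stats/eval pair collapses them into a single R² value.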
I have a dashboard that I use when checking if a server is compliant. It looks normal in the dashboard, but when I export it as a PDF the last column gets moved to a new page. I found this in ./etc/system/bin/pdfgen_endpoint.py:

DEFAULT_PAPER_ORIENTATION = 'portrait'

What I can't find is a way of overriding the default to change it to landscape. Does such a file exist? If not, beyond changing the code, any ideas on how to get a landscape report so the final column will be on the same page? TIA Joe
For "uptime", just subtract your downtime from 100%. Something like this at the end: | eval percent_uptime = 100 - percent_downtime Hope that works too!
I encountered a similar issue. My scenario involved comparing two alerts and wanting to write the results of the test alert to an index while maintaining the same configurations (like throttling) for both. Using collect wouldn't work, because it was writing duplicate entries to the index due to the alert configuration. I managed to achieve this by directing all the results to:

| tojson output_field="foo"

Then in the event field you can just enter: $result.foo$
Hi @adrifesa95, it shouldn't be a problem: on your HF, you can receive logs on port 9997 and send logs to Splunk Cloud on port 9997. Check whether the UFs can reach the HF (using e.g. telnet). Ciao. Giuseppe
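For reference, a minimal sketch of the HF-side configuration being described (the output group name and the Splunk Cloud host below are placeholders; a Splunk Cloud stack normally supplies a forwarder credentials app that carries the real outputs settings and certificates):

# inputs.conf on the HF: listen for traffic from the UFs
[splunktcp://9997]
disabled = 0

# outputs.conf on the HF: forward the received data on to Splunk Cloud
[tcpout:splunkcloud]
server = inputs.example.splunkcloud.com:9997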
With the information from both and some research, I found the answer that I was looking for:

| stats values(host) as host
| eval host="(".mvjoin(host,",").")"
| nomv host
| eval description=host." host have failed"

The results gave me what I was looking for: (host1,host2,host3....) host have failed. The stats command made host a multivalue field, the mvjoin put the commas between the values, and the nomv took away the multivalue and made it a normal field. Thanks for the ideas. Appreciate the time from your busy schedules.
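A run-anywhere sketch of the same technique, using makeresults only to fabricate a few sample host names:

| makeresults format=csv data="host
host1
host2
host3"
| stats values(host) as host
| eval host="(".mvjoin(host,",").")"
| nomv host
| eval description=host." host have failed"

Running it should produce a single row with description set to "(host1,host2,host3) host have failed".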
Hi @Nawaz Ali.Mohammad, Can you jump in and help out @Pallavi.Lohar with his question?
You are awesome, I was able to get it working.

index=healthcheck integrationName="Opsgenie Edge Connector - Splunk" alert.message = "STORE_117_RSO - Unreachable" "entity.source"=Meraki
| rename alert.message AS "Branch"
| transaction "alert.id", alert.message startswith=Create endswith=Close keepevicted=true Branch
| stats sum(duration) as total_duration by Branch
| eval percent_downtime = (total_duration / (86400*7)) *100

Sorry, I just have one last question: this actually gives me the Downtime; how would I also show a percentage of Uptime? Wish I could give you 100 Kudos
Hello, I am currently working on building out a GUI for the software I work on and am looking for a way to query data from our Splunk instance to use in our front-end. I have looked at the documentation here (Splunk Design System) as well as some code examples here (GitHub - splunk/react_search_example), but I cannot find a straightforward answer for how to hook into our Splunk instance and query data from it. From the documentation and examples it seems like what I am trying to do is definitely possible, I just can't figure out how. Any help is greatly appreciated. Kevin
Hello, I need to determine the app name based on a lookup table for the SPL search below. The SPL search results have a field called SQL, which contains SQL syntax that includes one of the keywords held in a field of the lookup table. I am not sure if join, union, inputlookup, lookup, and/or a combination with the where command will solve this puzzle. Any help is appreciated.

The lookup file name is: lookup_weblogic_app.csv

The lookup file sample values are:

lk_wlc_app_short     lk_wlc_app_name
ART                  Attendance Roster Tool
Building_Mailer      Building Mailer
SCBT                 Service Center Billing Tool

SPL search results (SQL field):

''' as "FIELD",''Missing Value'' AS "ERROR" from scbt_owner.SCBT_LOAD_CLOB_DATA_WORK
''' as "something ",''Missing Value'' AS "ERROR" from ART_owner.ART_LOAD_CLOB_DATA_WORK
from Building_Mailer_owner.Building_Mailer_

Desired final outcome:

lk_wlc_app_short     SQL
scbt                 ''' as "FIELD",''Missing Value'' AS "ERROR" from scbt_owner.SCBT_LOAD_CLOB_DATA_WORK
ATR                  ''' as "something ",''Missing Value'' AS "ERROR" from ART_owner.ART_LOAD_CLOB_DATA_WORK
Building_Mailer      from Building_Mailer_owner.Building_Mailer_
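One possible approach, sketched without access to the real data: fan each result out against every lookup row (a join on an artificial key with max=0), keep only the pairs where the SQL text contains the short name, and tabulate. The joiner field below is invented purely as the join key, and <your base search> stands in for the existing search:

<your base search>
| eval joiner=1
| join type=inner max=0 joiner
    [| inputlookup lookup_weblogic_app.csv
     | eval joiner=1]
| where like(upper(SQL), "%".upper(lk_wlc_app_short)."%")
| table lk_wlc_app_short SQL

The upper() calls make the match case-insensitive, since the sample SQL uses both scbt_owner and SCBT_LOAD. Alternatively, if the lookup values were stored with surrounding wildcards (e.g. *SCBT*), a lookup defined with WILDCARD match_type in transforms.conf could reach the same result.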
Ah, so you have that part. The HF does not need to be able to see the indexes if the outputs are set up correctly. You can use, at the end of your existing ldapsearch:

... | collect index=<indexname>

which should just tuck that data into the index you name there. Again, as long as the index exists on the indexer, your HF doesn't need to "see" the index. It should "just work". Which brings up the point that if it doesn't work, I'd suspect your forwarding to your cloud is not actually set up right, but that's a different issue.
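As a sketch of how the scheduled report could look end to end (the sourcetype value is a placeholder, and the index named in collect must already exist on the indexing tier):

| ldapsearch ... your existing query ...
| collect index=<indexname> sourcetype=ldap:search addtime=true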
Hello, I have just started to ingest some log files whose entries are separated by lines of dashes, e.g. --------, however for some reason Splunk is splitting the one log file into multiple events. Can someone help me figure this out? Example log attached. My input stanza is currently set as:

[monitor://C:\ProgramData\XXX\XXX\CaseManagement*.log]
disabled = 0
interval = 60
index = XXXXlogs
sourcetype = jlogs

Do I need a props file and, if so, what do I put in it?
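A minimal props.conf sketch of the usual approach, assuming each event really is delimited by a run of dashes on its own line (the regex and the TRUNCATE value are guesses that would need adjusting to the actual log):

# props.conf on the indexer or HF that parses this sourcetype
[jlogs]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+-{4,}[\r\n]+)
TRUNCATE = 100000

With SHOULD_LINEMERGE disabled, events are split only where LINE_BREAKER's capture group matches, so the dashed separator is consumed as the boundary and everything between separators stays in one event.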
I'd love to see a small sample of the data this is based on (and please remember to use the code button to enter it so the browser/system doesn't eat special characters).

In any case, 1) your output has Created and Branch, Branch being "alert.message". Yet you don't include this in your transaction?

Here's a run-anywhere search that illustrates the technique.

| makeresults format="CSV" data="time, action, branch
1715258900, create, bigville
1715251900, close, bigville
1715254900, create, smallville
1715253920, close, smallville
1715228900, create, bigville
1715211970, close, bigville"
| eval _time = time
| transaction maxspan=5h branch

In this case we have two branches, "bigville" and "smallville". The first 7 lines just build a set of data to work with. We then convert time into "the real time of the event". The meat is the transaction; we are now doing it "by branch" (though 'transaction' doesn't use the keyword "by"). So if you run the above, you'll see we create 3 transactions, each of which has a duration field in it. (I had to fiddle with the maxspan to get my silly test data to work right.)

Now, let's add this to the end:

| stats sum(duration) as total_duration by branch

And poof, we now have a total sum of the duration fields for each branch. Once we have that, we can add to the end...

| eval percent_uptime = (total_duration / (86400*7)) *100

and there's our percent uptime. Obviously smallville has some problems.

So, untested (I don't have your data), but I think this should work for you:

index=healthcheck integrationName="Opsgenie Edge Connector - Splunk" alert.message = "STORE*" "entity.source"=Meraki
| rename alert.message AS "Branch"
| transaction "alert.id", alert.message startswith=Create endswith=Close keepevicted=true Branch
| where closed_txn=0
| spath 'alert.createdAt'
| stats sum(duration) as total_duration, latest(Created) as Created by Branch
| eval Created=strftime('alert.createdAt'/1000,"%m-%d-%Y %I:%M:%S %p")
| eval percent_uptime = (total_duration / (86400*7)) *100

I moved your rename to earlier (because life is easier this way), added "Branch" to your transaction, left most of that middle bit alone, added the stats to sum the duration of the transactions and to snag the latest "Create" from the event (again by "Branch"), then a bit of cleanup and math.

Give it a try. And as always, if something's not working right, start chopping lines off the end of that search until you get back to data that makes sense, then analyze it one line at a time going forward, being careful to figure out how each step works and what it does and that its results are right (and fixing it if it isn't), then proceed. Sort of how I gave you the run-anywhere example, splitting it out into three sets of search so you can see how it builds.
I too would like to understand why we can't add a logo to an app created via the cloud Portal. Has this been looked at yet as an update, as it would make life so much easier?
Hello @Richfez  Thank you for the quick response. We have an HF configured and it is forwarding the data to the IDX. My scenario is: we have installed the LDAPSearch app on the HF and we are able to run LDAP searches in the HF Web UI. We want to index that output in an index we have created in Splunk Cloud. I was thinking that I'll create the report as a search and add the action to log those events, but that did not work as the HF is not able to see the indexes. I am looking for any way to achieve that. Thanks Murali
Hello, Thank you for the very quick response, much appreciated and helpful. I have been testing the uptime search you provided to obtain the percentage, but I am not very good yet at creating searches. This is the search I am using:

index=healthcheck integrationName="Opsgenie Edge Connector - Splunk" alert.message = "STORE*" "entity.source"=Meraki
| transaction "alert.id", alert.message startswith=Create endswith=Close keepevicted=true
| where closed_txn=0
| fields alert.updatedAt, alert.message, alertAlias, alert.id, action, "alertDetails.Alert Details URL", closed_txn, _time, dv_number, "alert.createdAt"
| spath 'alert.createdAt'
| eval Created=strftime('alert.createdAt'/1000,"%m-%d-%Y %I:%M:%S %p")
| rename alert.message AS "Branch"
| table Created, Branch
| sort by Created DESC

Can't figure out what the stats sum(duration) should be by. The goal is to have a percentage of the time between the Create and Close transactions out of 7 days. Thanks again for all of the help, Tom