All Posts

With the information from both of you and some research, I found the answer I was looking for:

| stats values(host) as host
| eval host="(".mvjoin(host,",").")"
| nomv host
| eval description=host." host have failed"

The results gave me what I was looking for: (host1,host2,host3....) host have failed

The stats command made host a multivalue field, mvjoin put the commas between the values, and nomv took away the multivalue and made it a normal single-value field. Thanks for the ideas, and I appreciate the time from your busy schedules.
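For anyone who wants to try this without real data, here is a minimal run-anywhere sketch of the same technique; the host values are made up with makeresults and streamstats purely for illustration:

| makeresults count=3
| streamstats count
| eval host="host".tostring(count)
| stats values(host) as host
| eval host="(".mvjoin(host,",").")"
| nomv host
| eval description=host." host have failed"

Running it should produce a single result whose description is "(host1,host2,host3) host have failed", mirroring the output above.
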
Hi @Nawaz Ali.Mohammad, can you jump in and help out @Pallavi.Lohar with his question?
You are awesome, I was able to get it working.

index=healthcheck integrationName="Opsgenie Edge Connector - Splunk" alert.message = "STORE_117_RSO - Unreachable" "entity.source"=Meraki
| rename alert.message AS "Branch"
| transaction "alert.id", alert.message startswith=Create endswith=Close keepevicted=true Branch
| stats sum(duration) as total_duration by Branch
| eval percent_downtime = (total_duration / (86400*7)) *100

Sorry, I just have one last question: this actually gives me the Downtime, so how would I also show a percentage of Uptime? Wish I could give you 100 Kudos.

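For what it's worth, here is a minimal sketch of what I think the uptime version would look like, assuming total_duration above really is the summed downtime over the fixed 7-day window:

| stats sum(duration) as total_duration by Branch
| eval percent_downtime = (total_duration / (86400*7)) * 100
| eval percent_uptime = 100 - percent_downtime
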
Hello, I am currently working on building out a GUI for the software I work on and am looking for a way to query data from our Splunk instance to use in our front-end. I have looked at the documentation here (Splunk Design System) as well as some code examples here (GitHub - splunk/react_search_example), but I cannot find a straightforward answer for how to hook into our Splunk instance and query data from it. From the documentation and examples it seems like what I am trying to do is definitely possible, I just can't figure out how.

Any help is greatly appreciated.
Kevin

Hello, I need to determine the app name based on a lookup table for the SPL search below. The SPL search results have a field, called SQL, which holds SQL syntax that contains one of the keywords in a field of the lookup table. I am not sure if join, union, inputlookup, lookup and/or a combination with the where command will solve this puzzle. Any help is appreciated.

The lookup file name is: lookup_weblogic_app.csv

The lookup file sample values are:

lk_wlc_app_short    lk_wlc_app_name
ART                 Attendance Roster Tool
Building_Mailer     Building Mailer
SCBT                Service Center Billing Tool

SPL search results (SQL field):

''' as "FIELD",''Missing Value'' AS "ERROR" from scbt_owner.SCBT_LOAD_CLOB_DATA_WORK
''' as "something ",''Missing Value'' AS "ERROR" from ART_owner.ART_LOAD_CLOB_DATA_WORK
from Building_Mailer_owner.Building_Mailer_

Desired final outcome:

lk_wlc_app_short    SQL
scbt                ''' as "FIELD",''Missing Value'' AS "ERROR" from scbt_owner.SCBT_LOAD_CLOB_DATA_WORK
ART                 ''' as "something ",''Missing Value'' AS "ERROR" from ART_owner.ART_LOAD_CLOB_DATA_WORK
Building_Mailer     from Building_Mailer_owner.Building_Mailer_

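Not knowing the full base search, here is one possible sketch. It assumes the app short name always appears immediately before "_owner" in the SQL text (as in the samples above), extracts it with rex, and then enriches it from the lookup. Note the lookup match is case-sensitive by default, so "scbt" vs "SCBT" may need normalizing (e.g. an upper() eval, or a case-insensitive lookup definition):

... your base search ...
| rex field=SQL "(?i)from\s+(?<lk_wlc_app_short>[A-Za-z_]+?)_owner\."
| lookup lookup_weblogic_app.csv lk_wlc_app_short OUTPUT lk_wlc_app_name
| table lk_wlc_app_short lk_wlc_app_name SQL
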
Ah, so you have that part. The HF does not need to be able to see the indexes if the outputs are set up correctly.

You can use, at the end of your existing ldapsearch:

... | collect index=<indexname>

which should just tuck that data into the index you name there. Again, as long as the index exists on the indexer, your HF doesn't need to "see" the index. It should "just work".

Which brings up the point that if it doesn't work, I'd suspect your forwarding to your cloud is not actually set up right, but that's a different issue.

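Untested and with placeholder names, but the end-to-end shape would look something like this; the ldapsearch arguments are only an illustration (use whatever search you already run on the HF), and ldap_data stands in for whatever index you created in Splunk Cloud:

| ldapsearch search="(objectClass=user)" attrs="sAMAccountName,lastLogonTimestamp"
| collect index=ldap_data

Saved as a scheduled report on the HF, that would push the results into the named index on each run.
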
Hello, I have just started to ingest some log files whose entries are split up by lines of dashes, e.g. --------. However, for some reason Splunk is splitting the one log file into multiple events. Can someone help me figure this out? Example log attached.

My input is currently set as:

[monitor://C:\ProgramData\XXX\XXX\CaseManagement*.log]
disabled = 0
interval = 60
index = XXXXlogs
sourcetype = jlogs

Do I need a props file, and if so, what do I put in it?

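I can't see the attached sample, but if each record really does end with a line of dashes, a props.conf stanza along these lines on the first full Splunk instance that parses the data (HF or indexers, not the UF for these settings) is the usual starting point; adjust the regex to match the actual separator:

[jlogs]
# break a new event at each line made up of dashes, and discard the dash line itself
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+-{4,}[\r\n]+)
# raise this if individual events are longer than the default 10000 bytes
TRUNCATE = 50000
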
I'd love to see a small sample of the data this is based on (and please remember to use the code button to enter it so the browser/system doesn't eat special characters).

In any case: 1) your output has Created and Branch, Branch being "alert.message", yet you don't include this in your transaction?

Here's a run-anywhere search that illustrates the technique.

| makeresults format="CSV" data="time, action, branch
1715258900, create, bigville
1715251900, close, bigville
1715254900, create, smallville
1715253920, close, smallville
1715228900, create, bigville
1715211970, close, bigville"
| eval _time = time
| transaction maxspan=5h branch

In this case we have two branches, "bigville" and "smallville". The first 7 lines just build a set of data to work with. We then convert time into "the real time of the event". The meat is the transaction; we are now doing it "by branch" (though 'transaction' doesn't use the keyword "by"). So if you run the above, you'll see we create 3 transactions, each with a duration field in it. (I had to fiddle with the maxspan to get my silly test data to work right.)

Now, let's add this to the end:

| stats sum(duration) as total_duration by branch

And poof, we now have a total sum of the duration fields for each branch. Once we have that, we can add to the end...

| eval percent_uptime = (total_duration / (86400*7)) *100

and there's our percent uptime. Obviously smallville has some problems.

So, untested (I don't have your data), but I think this should work for you:

index=healthcheck integrationName="Opsgenie Edge Connector - Splunk" alert.message = "STORE*" "entity.source"=Meraki
| rename alert.message AS "Branch"
| transaction "alert.id", alert.message startswith=Create endswith=Close keepevicted=true Branch
| where closed_txn=0
| spath 'alert.createdAt'
| eval Created=strftime('alert.createdAt'/1000,"%m-%d-%Y %I:%M:%S %p")
| stats sum(duration) as total_duration, latest(Created) as Created by Branch
| eval percent_uptime = (total_duration / (86400*7)) *100

I moved your rename to earlier (because life is easier this way), added "Branch" to your transaction, left most of that middle bit alone, moved the Created eval before the stats so the field exists when stats grabs it, added the stats to sum the duration of the transactions and to snag the latest "Created" (again by "Branch"), then a bit of cleanup and math.

Give it a try. And as always, if something's not working right, start chopping lines off the end of that search until you get back to data that makes sense, then analyze it one line at a time going forward, being careful to figure out how each step works, what it does, and that its results are right (fixing it if it isn't), then proceeding. Sort of how I gave you the run-anywhere example, splitting it out into three sets of search so you can see how it builds.

I too would like to understand why we can't add a logo to an app created via the cloud Portal. Has this been looked at yet as a possible update? It would make life so much easier.
Hello @Richfez

Thank you for the quick response. We have a HF configured and it is forwarding data to the IDX.

My scenario is: we have installed the LDAPSearch app on the HF and we are able to run LDAP searches in the HF Web UI. We want to index that output into an index we have created in Splunk Cloud.

I was thinking that I'd save the search as a report and add an action to log those events, but that did not work, as the HF is not able to see the indexes. I am looking for any way to achieve that.

Thanks
Murali

Hello,

Thank you for the very quick response, much appreciated and helpful. I have been testing the uptime you provided to obtain the percentage, but I am not very good yet at creating searches. This is the search I am using:

index=healthcheck integrationName="Opsgenie Edge Connector - Splunk" alert.message = "STORE*" "entity.source"=Meraki
| transaction "alert.id", alert.message startswith=Create endswith=Close keepevicted=true
| where closed_txn=0
| fields alert.updatedAt, alert.message, alertAlias, alert.id, action, "alertDetails.Alert Details URL", closed_txn, _time, dv_number, "alert.createdAt"
| spath 'alert.createdAt'
| eval Created=strftime('alert.createdAt'/1000,"%m-%d-%Y %I:%M:%S %p")
| rename alert.message AS "Branch"
| table Created, Branch
| sort by Created DESC

I can't figure out what the stats sum(duration) should be by. The goal is to have a percentage of the time between the Create and Close transactions out of 7 days.

Thanks again for all of the help,
Tom

OK, so here are the steps: https://docs.splunk.com/Documentation/Forwarder/9.2.1/Forwarder/ConfigSCUFCredentials#Install_the_forwarder_credentials_on_individual_forwarders_in_.2Anix

It's strange that those instructions are not to be found in the Splunk Cloud Forwarder manual, but I've sent in some feedback on that and hopefully they'll make the above-linked instructions easier to find.

Happy Splunking, and if you found this useful then karma would be appreciated!
-Rich

Assuming DT is the date you want to use and you already have your data in this format, try this:

| untable DT category state
| where state="H" or (category="OVERAL" and state="C")
| streamstats window=1 current=f values(state) as previous by DT
| where state="C" and previous="H"
| stats count

In Splunk Cloud, go to Settings > Server settings > Email settings and look for this section. There's more information in Splunk's docs for PDF generation.
If now is no longer an option, there should be no reason to check for it! Try something like this:

<eval token="latest_Time">relative_time($time.latest$, $timedrop$)</eval>

Don't forget to change the default for the dropdown to something other than now.

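For example, with illustrative token names and offset values only (adapt to your dashboard), the dropdown might end up looking something like this in Simple XML:

<input type="dropdown" token="timedrop">
  <label>Time offset</label>
  <choice value="-15m">15 minutes ago</choice>
  <choice value="-1h">1 hour ago</choice>
  <default>-15m</default>
  <change>
    <eval token="latest_Time">relative_time($time.latest$, $timedrop$)</eval>
  </change>
</input>
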
There were a few errors, but this should work. Note I broke out the comparison_date calculation from the eval where you decide if they need to reset or not, to a) make it more clear and b) so you can see the dates/strings it's comparing with.

| makeresults format="CSV" data="date
2024-05-09T08:05:00
2024-02-09T08:05:00"
| eval epoch_lastLogonTimestamp_date = strptime(date, "%Y-%m-%dT%H:%M:%S")
| eval last_logon_total = strftime(epoch_lastLogonTimestamp_date, "%d/%m/%Y")
| eval comparison_date = relative_time(now(),"-61d@d")
| eval action = if(epoch_lastLogonTimestamp_date <= comparison_date, "reset password", "no change needed")

I think the biggest issue was that the epoch date is the only one you need. Do your math on it, work with it. If you need to see it in a more human readable version, you can convert it back at the end. In this case, 'last_logon_total' is simply unused after you build it.

Happy Splunking, and if this helped karma would be appreciated!
-Rich

How do you determine what the day is? In your example, DT doesn't always equate to the date shown in _time.
Hello All, I have an LDAPsearch app installed on one of the on-prem Heavy Forwarders and I need to index the search output into an index we have created. Our IDX and SH are on Splunk Cloud. Would appreciate any suggestions. Thanks in advance. Murali
gcusello, what I meant to do is this: using index=xta, I want to pull fma_id, Org_unify, description, the AH tag and the ISO name. Then, using the fma_id pulled from xta (for example index=*OS-001*), report all indexes that have that fma_id as part of the string, and run a count of all hosts/systems that have the UF installed and are registered to that fma_id, by type (Windows vs Linux). Then I have to check the available data sets to see whether the hosts under that fma_id exist in AD, Defender, BigFix, and Tenable, and when they were last detected. Thanks a lot for your help on this matter.
I think we're missing some details to be able to provide *the answer* for you, but I can certainly point you in the right direction! You have a transaction, so you have a duration for each transaction. So you'll want to sum those durations using stats, then do some division to get your uptime. Something like (pseudocode only):

... your base search here
| transaction ...
| stats sum(duration) as total_uptime [by something?]
| eval percent_uptime = (total_uptime / (86400*7)) * 100

That's assuming a 1-week period and that your durations are in seconds (I'm pretty sure that's what pops out of transaction), so 86400 seconds per day times 7 days.

Give that a try, and if you have any further problems or questions about this, reply back with a bit more information (like the search involved, a bit of the sample output from that search, etc...)

Also if this helps, karma would be appreciated!

Happy Splunking,
Rich