I too would like to understand why we can't add a logo to an app created via the cloud portal. Has this been considered as an update yet? It would make life so much easier.
Hello @Richfez, thank you for the quick response. We have an HF configured and it is forwarding data to the IDX. My scenario: we have installed the LDAPSearch app on the HF, and we are able to run LDAP searches in the HF Web UI. We want to index that output into an index we have created in Splunk Cloud. I was thinking I'd create the search as a report and add an action to log those events, but that did not work because the HF is not able to see the indexes. I am looking for any way to achieve this. Thanks, Murali
Hello, thank you for the very quick response, much appreciated and helpful. I have been testing the uptime search you provided to obtain the percentage, but I am not very good yet at creating searches. This is the search I am using:

index=healthcheck integrationName="Opsgenie Edge Connector - Splunk" alert.message="STORE*" "entity.source"=Meraki
| transaction "alert.id", alert.message startswith=Create endswith=Close keepevicted=true
| where closed_txn=0
| fields alert.updatedAt, alert.message, alertAlias, alert.id, action, "alertDetails.Alert Details URL", closed_txn, _time, dv_number, "alert.createdAt"
| spath 'alert.createdAt'
| eval Created=strftime('alert.createdAt'/1000, "%m-%d-%Y %I:%M:%S %p")
| rename alert.message AS "Branch"
| table Created, Branch
| sort by Created DESC

I can't figure out what the stats sum(duration) should be grouped by. The goal is to get the percentage of time between the Create and Close transactions out of 7 days. Thanks again for all of the help, Tom
OK, so here are the steps: https://docs.splunk.com/Documentation/Forwarder/9.2.1/Forwarder/ConfigSCUFCredentials#Install_the_forwarder_credentials_on_individual_forwarders_in_.2Anix It's strange that those instructions are not to be found in the Splunk Cloud forwarder manual, but I've sent in some feedback on that, and hopefully they'll make the above-linked instructions easier to find. Happy Splunking, and if you found this useful then karma would be appreciated! -Rich
Assuming DT is the date you want to use and you already have your data in this format, try this:

| untable DT category state
| where state="H" or (category="OVERAL" and state="C")
| streamstats window=1 current=f values(state) as previous by DT
| where state="C" and previous="H"
| stats count
In Splunk Cloud, go to Settings > Server settings > Email settings. There's more information in Splunk's docs for PDF generation.
If now is no longer an option, there should be no reason to check for it! Try something like this:

<eval token="latest_Time">relative_time($time.latest$, $timedrop$)</eval>

Don't forget to change the dropdown's default to something other than now.
There were a few errors, but this should work. Note I broke the comparison_date calculation out of the eval where you decide if they need to reset or not, to a) make it clearer and b) let you see the dates/strings it's comparing.

| makeresults format="CSV" data="date
2024-05-09T08:05:00
2024-02-09T08:05:00"
| eval epoch_lastLogonTimestamp_date = strptime(date, "%Y-%m-%dT%H:%M:%S")
| eval last_logon_total = strftime(epoch_lastLogonTimestamp_date, "%d/%m/%Y")
| eval comparison_date = relative_time(now(), "-61d@d")
| eval action = if(epoch_lastLogonTimestamp_date <= comparison_date, "reset password", "no change needed")

I think the biggest issue was that the epoch date is the only one you need. Do your math on it, work with it. If you need a more human-readable version, you can convert it back at the end. In this case, 'last_logon_total' is simply unused after you build it. Happy Splunking, and if this helped, karma would be appreciated! -Rich
How do you determine what the day is? In your example, DT doesn't always equate to the date shown in _time.
Hello all, I have the LDAPSearch app installed on one of our on-prem heavy forwarders, and I need to index the search output into an index we have created. Our IDX and SH are in Splunk Cloud. I would appreciate any suggestions. Thanks in advance, Murali
gcusello, what I meant to do is this: using index=xta, I want to pull fma_id, Org_unify, description, the AH tag, and the ISO name. Using the fma_id pulled from xta (for example, index=*OS-001*), report all indexes that have that fma_id as part of the string, then run a count of all hosts/systems that have the UF installed and are registered to that fma_id, by type (Windows vs. Linux). Then I have to check the available data sets to see whether the hosts under that fma_id exist in AD, Defender, BigFix, and Tenable, and when they were last detected. Thanks a lot for your help on this matter.
I think we're missing some details to be able to provide *the answer* for you, but I can certainly point you in the right direction! You have a transaction, so you have a duration for each transaction. You'll want to sum those durations using stats, then do some division to get your uptime. Something like (pseudocode only):

... your base search here
| transaction ...
| stats sum(duration) as total_uptime [by something?]
| eval percent_uptime = total_uptime / (86400*7)

That assumes a one-week period and that your durations are in seconds (I'm pretty sure that's what pops out of transaction), so 86400 seconds per day times 7 days. Give that a try, and if you have any further problems or questions about this, reply back with a bit more information (like the search involved, a bit of the sample output from that search, etc.). Also, if this helps, karma would be appreciated! Happy Splunking, Rich
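To make the summing and division concrete, here's a self-contained sketch you can paste into any Splunk search bar. The duration values are made up for illustration, and total_uptime/percent_uptime are placeholder field names, not from your search:

```spl
| makeresults format="csv" data="duration
3600
7200
1800"
| stats sum(duration) as total_uptime
| eval percent_uptime = round(100 * total_uptime / (86400*7), 2)
```

Swap the makeresults block for your real base search plus transaction, and add a by clause to stats if you need a percentage per branch or per alert type.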
This example using makeresults can show you a case statement:

| makeresults count=10
| eval _raw="message
success job,
processed job,
completed job,
failed job,"
| multikv forceheader=1
| eval status = case(
    like(message, "%success%") OR like(message, "%processed%") OR like(message, "%completed%"), "success",
    like(message, "%failed%") OR like(message, "%failure%"), "failure",
    true(), "other")
| table _time, message, status
I have a status field with two string values, Dropped and Notdropped. If the value comes as Dropped, I want to show the background color as green, and if the value comes as Notdropped, the color should be green. How can I achieve this in a single-value card in Dashboard Studio?
Hello, if possible, I need help getting a percentage of uptime for a transaction over time. I have a search that creates a transaction, based on: startswith=Create endswith=Close keepevicted=true The events are coming from Opsgenie for when an alert is created and closed. Is there any way to take the time between Create/Close or Close/Create over a one-week timeframe to obtain the percentage? Thanks for all of the help; let me know if any more details are needed. Tom
Afternoon all, I'd like some help please with some SPL logic that I just can't crack. I have data on some users in our Active Directory system, and I am trying to create a new column with actions: identify those who have not logged in for more than 61 days and, if so, return "reset password" as the action. Here's the part that I am having an issue with, below. The first two lines are working as expected, returning last_logon_total (day, month, year). I have a new field I created called 'action' that I want to return a value in for those users who have not logged in for more than 61 days, but I can't get the SPL right.

| eval epoch_lastLogonTimestamp_date = strptime(lastLogonTimestamp, "%Y-%m-%dT%H:%M:%S")
| eval last_logon_total = strftime(epoch_lastLogonTimestamp_date, "%d/%m/%Y")
| eval action = if(last_logon_total = relative_time(), "-61d@d", "reset password")

Any ideas? Thanks, Paula
The mvjoin line was only one way I tried to put all the hosts together so it would look like (host1, host2, host3); they are not coming in on the description. I am having difficulty getting the results side by side, separated by a comma; that is why I am on here. I have looked through so much documentation and cannot get the hosts to go into one event that looks like (host1, host2, host3). You stated to use a foreach command, but I am not quite sure how that would look to get the hosts into one event side by side.
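For what it's worth, one common pattern for this is stats values followed by mvjoin. This is only a sketch: the grouping field (description here) and the host field name are assumptions you'd replace with your own:

```spl
... your base search here
| stats values(host) as hosts by description
| eval hosts = "(" . mvjoin(hosts, ", ") . ")"
```

stats values collapses the hosts into one multivalue field per description, and mvjoin then flattens that multivalue field into the single comma-separated string you described.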
The information you seek is available on Splunkbase at https://splunkbase.splunk.com/app/7245. Splunk AI Assistant is in preview, so you must request access before you can download it. Details are on the Splunkbase page.
Hi all, I have a field in my data called 'message', which contains information about the status of the file. I'd like to categorize files as success or failure based on the content of that field. For example, if the message contains values like (success, processed, completed), I want to label the corresponding file as a success; if it contains values like (failed, failure), I want to label it as a failure. How can I implement this in an SPL query? I tried the query below, but I am not getting proper results.

index=mulesoft environment=DEV applicationName="Test"
| stats values(content.FileName) as Filename1 values(content.ErrorMsg) as errormsg values(content.Error) as error values(message) as message values(priority) as priority min(timestamp) AS Logon_Time, max(timestamp) AS Logoff_Time BY correlationId
| eval SuccessFileName=case(match(message, "File put Succesfully*|Successfully created file data*|Archive file processed successfully*|Summary of all Batch*|processed successfully for file name*|SUCCESS") AND not match(priority,"ERROR|WARN"),FileName1,1=1,null())
| eval FailureFileName=case(match(message,"Failed to process file:"),FileName1,1=1,null())
| table SuccessFileName FailureFileName Response correlationId
I should have said to change it to a Windows path, as the command I gave is for Linux.