All Posts

Hello folks, my organization is struggling to ingest the Cisco Firepower audit (sys)logs into Splunk; we've been able to successfully ingest all the other sources. With the Firepowers only offering 514/udp, which is unavailable according to Splunk, or a HEC configuration without tokens, so that Splunk drops (or presumably would drop) the events, our options appear limited. Has anyone else come across this issue and solved it?
Hello, we found a solution: there was a metadata index source key that we could use. Thanks for your help, guys.
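For anyone hitting the same wall later: rewriting the destination index at parse time is done with a transform whose destination key is _MetaData:Index, which sounds like the key the poster found. A minimal sketch, assuming the props/transforms are applied on a heavy forwarder or indexer, and with a hypothetical sourcetype and index name:

props.conf:

[cisco:firepower:audit]
TRANSFORMS-route_index = route_firepower_audit

transforms.conf:

[route_firepower_audit]
# match every event of this sourcetype and rewrite its destination index
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = firepower_audit

The FORMAT value must be an index that already exists on the indexers; otherwise the events are dropped, or land in the last-chance index if one is configured.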
I have similar issues popping up as of late. But how does one isolate the affected forwarder? The error message reads:

Forwarder Ingestion Latency
Root Cause(s): Indicator 'ingestion_latency_gap_multiplier' exceeded configured value. The observed value is 89. Message from <UUID>:<ip-addrs>:54246
Unhealthy Instances: indexer1 indexer2

The "message from" section just lists a UUID, an IP address, and a port. Which part here would help me find the actual forwarder? The UUID does not match any "Client name" under forwarder management on the deployment server. The IP address does not match a server on which I have a forwarder installed. One or a few of the indexers are listed as "unhealthy instances" each time, but the actual error sounds like it lives on the forwarder end and not on the indexer. With the information available in this warning/error, how can I figure out which forwarder is either experiencing latency issues or needs to have the mentioned log file flushed?
So, how does one isolate the affected forwarder? The error message reads:

Forwarder Ingestion Latency
Root Cause(s): Indicator 'ingestion_latency_gap_multiplier' exceeded configured value. The observed value is 89. Message from <UUID>:<ip-addrs>:54246
Unhealthy Instances: indexer1 indexer2

The "message from" section just lists a UUID, an IP address, and a port. Which part here would help me find the actual forwarder? The UUID does not match any "Client name" under forwarder management on the deployment server. The IP address does not match a server on which I have a forwarder installed. One or a few of the indexers are listed as "unhealthy instances" each time, but the actual error sounds like it lives on the forwarder end and not on the indexer. With the information available in this warning/error, how can I figure out which forwarder is either experiencing latency issues or needs to have the mentioned log file flushed?
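Not part of the original post, but a common way to map that UUID and IP back to a hostname is the indexers' own connection metrics, which log each connecting forwarder's GUID, source IP, and hostname. A sketch, assuming the default _internal retention:

index=_internal sourcetype=splunkd source=*metrics.log* group=tcpin_connections
| stats latest(hostname) AS forwarder latest(sourceIp) AS src_ip latest(fwdType) AS type BY guid

Filtering on the UUID in the guid field, or on the IP from the health message in src_ip, should then point at the actual forwarder.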
If you're applying those props/transforms on the UF, that would explain why it isn't taking effect - parsing is not carried out on the UF (unless specifically enabled), so they will need to be applied on the HF, unless you're able to set the correct index values on the secondary environment's UFs.
Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
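On that last point, the index can be set per input on the UF itself, since index assignment at input time is metadata rather than parse-time logic. A minimal inputs.conf sketch (the monitor path, index, and sourcetype below are hypothetical):

[monitor:///var/log/myapp/app.log]
# index assignment happens at input time, so this works even on a UF
index = secondary_env
sourcetype = myapp:log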
Hi
To ensure that the app's web UI components are reloaded after an upgrade, you need to make sure that the version number in app.conf is updated (which I suspect you have done if you've updated it on Splunkbase!) and also the build = <numeric> key in the [launcher] stanza of app.conf - this forces the cache to be cleared for the app, and I've found previously that this can help with cached pages such as setup.xml and any associated JavaScript.
Example app.conf snippet:

[launcher]
version = 2.1.0
build = 210

Browsers can still cache aggressively, so instruct users to do a hard refresh (Ctrl+Shift+R or Cmd+Shift+R) after the upgrade if the above does not take effect. For more info on the build key in app.conf, check out the docs at https://docs.splunk.com/Documentation/Splunk/latest/Admin/Appconf#:~:text=performed.%0A*%20Default%3A%20false-,build%20%3D%20%3Cinteger%3E,-*%20Required.%0A*%20Must%20be
Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Was there any answer to this? I have the same CVE popping up on my scan and want to find a fix/workaround for it. Thanks!
Hello @extalt,
Thank you for asking your question on the Community. I'm not a product expert; let's see if the community can jump in and help. If you don't get a reply soon, you can reach out to AppDynamics Support. https://community.splunk.com/t5/AppDynamics-Knowledge-Base/How-do-I-open-a-case-with-AppDynamics-Support/ta-p/730735
I have found that just replacing width with max-width in the CSS style works also, i.e.

<row id="MasterRow">
  <panel depends="$alwaysHideCSS$">
    <title>Single value</title>
    <html>
      <style>
        #Panel1{max-width:15% !important;}
        #Panel2{max-width:85% !important;}
      </style>
    </html>
  </panel>
  <panel id="Panel1">....</panel>
  <panel id="Panel2">....</panel>
</row>
Renamed fields are not showing with the | table command.
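No details were posted, but the usual culprit is naming: rename the field first, then reference the new name exactly in table, quoting original field names that contain dots or spaces. A minimal sketch with hypothetical field names:

index=main sourcetype=myapp
| rename "some.nested.field" AS my_field
| table _time my_field

If the column still comes out empty, the original field may simply not exist in those events - rename does not create fields.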
Hi Splunk Community,
We've developed a new version of our Splunk app and recently published it to Splunkbase. However, we're facing issues when upgrading the app via Manage Apps in Splunk Web.
What's happening:
- After upgrading the app, it stops functioning.
- We've added new input fields to the setup page (setup.xml), but these changes do not show up immediately in the UI after the upgrade. The new fields only appear after clearing the browser cache and doing a hard reload.
- Interestingly, if we completely remove the old version of the app and do a fresh install of the new version from Splunkbase, everything works perfectly - the setup UI loads correctly, and logs appear as expected.
Any suggestion would be highly appreciated. Thanks
Try something like this

| streamstats max('Properties.Gems.DataSyncsExecutionContext.dataSyncProvidersExecutionGroupULID') as lstUlid
| where 'Properties.Gems.DataSyncsExecutionContext.dataSyncProvidersExecutionGroupULID' = lstUlid
Hi @shraddha09,
you cannot use max as an aggregation in the eval command - eval's max() only compares values within a single event; to take the maximum across events you need stats or a similar streaming command.
Try something like this:

index=wf_eit_ecio
| rename 'Properties.Gems.DataSyncsExecutionContext.dataSyncProvidersExecutionGroupULID' AS GroupULID 'Properties.Gems.DataSyncExecutionContext.DataSyncProviderName' AS ProviderName
| where GroupULID = lstUlid
| stats max(GroupULID) AS max BY ProviderName
| chart count(max) as TotalApiCallsCount BY ProviderName

I assumed that lstUlid is a threshold. Another thing: don't carry such long field names through the search; rename them right after the main search.
Ciao.
Giuseppe
Hi everyone,
Query 1: Thanks for suggesting multiple solutions. I am able to fetch the details correctly, but I am not able to set the business day as below:
Business day starts at 5 PM (D) and ends at 5 PM (D+1).
I've attached the final set of code. Can you please help answer this last question about setting the business day as 5 PM to 5 PM?
Query 2: Also, for Monday, the business day should be 5 PM Friday to 5 PM Monday. Is that possible?
I've attached the final source code. Can you please help provide the updates required in the source code to solve the above 2 queries?
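Not from the thread, but one common pattern for a 5 PM to 5 PM business day is to shift _time by 7 hours so the 5 PM boundary lands on midnight, then fold Saturday and Sunday into Monday for the weekend rule. A sketch - whether the resulting label should read as D or D+1 is an assumption you may need to flip:

| eval shifted=_time+25200
| eval dow=strftime(shifted, "%a")
| eval shifted=case(dow=="Sat", shifted+172800, dow=="Sun", shifted+86400, true(), shifted)
| eval business_day=strftime(shifted, "%Y-%m-%d")

The first eval adds 7 hours (25200 seconds) so that events after 5 PM roll into the next calendar day; the case() then pushes Saturday forward two days and Sunday forward one day, so everything between 5 PM Friday and 5 PM Monday carries Monday's label.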
Here is my SPL query:

index=wf_eit_ecio
| eval lstUlid=max('Properties.Gems.DataSyncsExecutionContext.dataSyncProvidersExecutionGroupULID')
| where 'Properties.Gems.DataSyncsExecutionContext.dataSyncProvidersExecutionGroupULID' = lstUlid
| chart count(Properties.Gems.DataSyncExecutionContext.DataSyncProviderName) as TotalApiCallsCount Over Properties.Gems.DataSyncExecutionContext.DataSyncProviderName

The lstUlid field is generated, but it's not working in the where condition and is not filtering the records.
Finally the LB team found a way to make it work with status codes. I accepted the first answer as the solution. Thank you, Luca
Hi @Splunkduck09
I think if this is the reason, then you'd be better off taking a feed of the data at ingestion time rather than extracting it out of Splunk. You could use Ingest Actions to write to S3 or NFS, which might be the easiest approach - check out https://docs.splunk.com/Documentation/Splunk/9.4.1/Data/DataIngest#Create_an_NFS_file_system_destination
There's also a lunch & learn video at https://www.youtube.com/watch?v=9W_4ERKTx94 which gives an overview of Ingest Actions and might help you too.
Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
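For reference, and to the best of my knowledge, Ingest Actions file system destinations are defined in outputs.conf as rfs stanzas. A minimal sketch with a hypothetical destination name and NFS mount path (verify the exact settings against the outputs.conf spec for your Splunk version):

[rfs:nfs_archive]
# file:// URI pointing at an NFS mount reachable from the indexers (hypothetical path)
path = file:///mnt/splunk-archive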
Hi @Splunkduck09
You can use the "dump" Splunk command to export from the proprietary Splunk bucket format back into plain text. The following example exports all events from index "bigdata" to the location "YYYYmmdd/HH/host" under the "$SPLUNK_HOME/var/run/splunk/dispatch/<sid>/dump/" directory on local disk, with "MyExport" as the prefix of the export filenames. Partitioning of the export data is achieved by the eval preceding the dump command.

index=bigdata
| eval _dstpath=strftime(_time, "%Y%m%d/%H") + "/" + host
| dump basefilename=MyExport fields="_time, host, source, sourcetype"

For more info check out https://docs.splunk.com/Documentation/Splunk/9.4.1/SearchReference/Dump
You can also dump the data using the CLI instead of SPL if required - check out https://docs.splunk.com/Documentation/Splunk/9.4.0/Search/Exportdatausingdumpcommand
Once this is done you will be able to open the resulting files in a standard text editor.
Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi Kiran,
Thanks for replying.
Use case: I would like to back up indexed Splunk logs into a third-party backup/restore system, say on a daily or weekly basis, and it is required to be in human-readable format, so the logs can be retrieved anytime in the future within the backup/restore system and read with Notepad or another similar tool if needed.
Would love to know your thoughts on the above.
Hi @osh55,
ok, please try this:

index=sample1 ((sourcetype=x host=host1) OR sourcetype=y)
| eval caller_party=if(sourcetype=="x", substr(caller, 2), caller_party)
| stats count(eval(sourcetype=="x")) AS all_calls count(eval(sourcetype=="y")) AS messagebank_calls BY caller
| search all_calls=*

See my approach and adapt it to your use case.
Ciao.
Giuseppe