All Posts

Hello Splunk Community, I’m reaching out for guidance on handling Knowledge Objects (KOs) that reside in the default directory of their respective apps and cannot be deleted from the Splunk UI.

We observed that:
• Some KOs throw the message “This saved search failed to handle removal request”, which, as documented, is likely because the KO is defined in both the local and default directories.

I have a couple of questions:
1. Can default-directory KOs be deleted manually via the filesystem or another method, if not possible through the UI?
2. Is there a safe alternative, such as disabling them, if deletion is not possible?
3. From a list of KOs I have, how can I programmatically identify which ones reside in the default directory?

Also, is there a recommended way to handle overlapping configurations between default and local directories, especially when clean-up or access revocation is needed? Any best practices, scripts, or documentation references would be greatly appreciated!
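One way to approach question 3 programmatically (a sketch only, assuming a *nix install with grep available and saved searches as the KO type - swap in macros.conf, eventtypes.conf, etc. for other KO types): btool with --debug prefixes every setting with the .conf file it was read from, so filtering on /default/ shows which objects are defined in an app's default directory.

# all saved-search settings that come from a default directory
$SPLUNK_HOME/bin/splunk btool savedsearches list --debug | grep "/default/savedsearches.conf"

# or scoped to a single app
$SPLUNK_HOME/bin/splunk btool --app=<app_name> savedsearches list --debug | grep "/default/"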
Hi @Raja_Selvaraj
DATETIME_CONFIG = CURRENT should work as expected. Can you please run btool to check whether DATETIME_CONFIG is taking effect or whether another config is overriding it?

splunk btool props list <sourcetype> --debug

The above command should list datetime_config. Sample format in props.conf:

[<sourcetype>]
DATETIME_CONFIG = CURRENT
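Since the --debug flag prefixes every setting with the .conf file it was read from, it also shows which file wins when the setting is defined in more than one place. A small sketch to narrow the output, assuming a *nix shell with grep (use findstr on Windows):

splunk btool props list <sourcetype> --debug | grep -i datetime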
Hi, I upgraded Splunk Enterprise from 9.2.3 to 9.4.3, and the KVStore status is failed. It was migrated successfully to 7.0.14 on one server automatically, but on the second server the migration did not start upon upgrade. Is there a solution to restore the KVStore status and migrate to 7.0.14? It is a standalone server and not part of a clustered environment. Some servers also have KVStore status Failed on version 9.2.3, and I want to fix the status before starting to upgrade them to 9.4.3.

This member:
backupRestoreStatus : Ready
disabled : 0
featureCompatibilityVersion : An error occurred during the last operation ('getParameter', domain: '15', code: '13053'): No suitable servers found: `serverSelectionTimeoutMS` expired: [Failed to connect to target host: 127.0.0.1:8191]
guid : xzy
port : 8191
standalone : 1
status : failed
storageEngine : wiredTiger
versionUpgradeInProgress : 0
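A couple of CLI commands that may help while troubleshooting this (a sketch, not a full recovery procedure; the clean step wipes the local KV store data, so back up your lookups/collections first and treat it strictly as a last resort on a standalone instance - whether it re-triggers the 7.0.14 migration on restart would need to be verified):

# check the current KV store state
$SPLUNK_HOME/bin/splunk show kvstore-status

# last resort: recreate the local KV store from scratch (KV store collections are lost)
$SPLUNK_HOME/bin/splunk stop
$SPLUNK_HOME/bin/splunk clean kvstore --local
$SPLUNK_HOME/bin/splunk start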
Hi Everyone, please help me with this ask - I need Splunk to show the respective events with the current date instead of the date when the file was placed on the host. For instance, if the file was placed on the server on 17th July, the events show with the date 17th July; instead, I want the current date. If the current date is 22nd July, then the events' date should be 22nd July, and likewise. I have tried DATETIME_CONFIG = CURRENT and DATETIME_CONFIG = NONE in props.conf, but it doesn't work.
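One assumption worth checking: DATETIME_CONFIG is applied at parse time, so it has to live on the first full Splunk instance that parses the data (an indexer or heavy forwarder), not on a universal forwarder, and in a stanza that matches the sourcetype of the input. A minimal sketch, with <your_sourcetype> and the app path as placeholders:

# $SPLUNK_HOME/etc/apps/<app>/local/props.conf on the indexer or heavy forwarder
[<your_sourcetype>]
DATETIME_CONFIG = CURRENT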
I am having a similar issue; however, in my case the field always has a suffix of sophos_event_input after the username. Example:

User
Joe-Smith, Adams sophos_event_input
Jane-Doe, Smith sophos_event_input

I would like to change the User field to:

User
Joe-Smith, Adams
Jane-Doe, Smith

Basically I want to get rid of the sophos_event_input suffix. How do I go about this?
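For a search-time fix, a minimal SPL sketch (assuming the field is literally named User and the suffix is always the exact string sophos_event_input at the end of the value):

... | eval User=trim(replace(User, "sophos_event_input$", ""))

or, equivalently, with a sed-style rex:

... | rex field=User mode=sed "s/ sophos_event_input$//"

If the suffix should never reach the field in the first place, the same idea could be moved into the add-on's field extraction instead, but that depends on how User is being extracted.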
I have a scheduled export report that runs daily at 11 PM from my monitoring dashboard. We are in the EST time zone and my dashboard is providing the data as expected, but the report PDF in the mail has data based on the GMT time zone and it does not match my dashboard numbers.

For example:
Expected time frame: 00:00 and 23:00 on 7/22
Getting GMT time frame: 00:00 and 03:00 on 7/23

How do I fix this? Thanks in advance.
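Two things that may be worth checking, on the assumption that the schedule is being evaluated in a different time zone than the dashboard: the time zone preference of the user who owns the scheduled report (user menu > Preferences > Time zone), since scheduled searches interpret relative time ranges in the owner's time zone, and the range the scheduled run actually searched. A hedged SPL snippet you can append to the report temporarily to print the effective range:

... | addinfo | eval searched_from=strftime(info_min_time, "%Y-%m-%d %H:%M:%S %Z"), searched_to=strftime(info_max_time, "%Y-%m-%d %H:%M:%S %Z")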
Hi @kennsche,
the role limitations apply to all searches and dashboards. So you could create a role with the time window limitation, assign this role to some of your users, and allow only that role to use the dashboard.
Otherwise, the only solution is to create a list of possible time periods (e.g. 5m, 10m, 15m, 30m, 60m, 90m, 120m) and display it in a dropdown list. But this solution is applicable only to a dashboard, not to a search.
Ciao.
Giuseppe
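To make the dropdown idea concrete, here is a minimal sketch in Classic (SimpleXML) syntax; the dashboard in this thread is Dashboard Studio, so treat this only as an illustration of the pattern - a fixed list of windows feeding the earliest token, with latest pinned to now:

<input type="dropdown" token="win" searchWhenChanged="true">
  <label>Time window</label>
  <choice value="-5m">Last 5 minutes</choice>
  <choice value="-30m">Last 30 minutes</choice>
  <choice value="-120m">Last 2 hours</choice>
  <default>-30m</default>
</input>

<search>
  <query>index=your_index your_search_here</query>
  <earliest>$win$</earliest>
  <latest>now</latest>
</search>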
Hi @gcusello thanks for the suggestion! Since I have two tabs, would the role approach be granular enough to limit the search to one tab within the same dashboard? The other tab should not be limited. Regards Kenny
That is working perfectly, thank you.
Hi @dsgoody
Firstly, force_local_processing is only needed if you're running a Universal Forwarder. If it's a Heavy Forwarder then you can safely remove it.
I think the main issue here is the stanza name - if you're referencing based on source then you need to use something like:

# props.conf
[source::UDP:<port>]

Alternatively you can apply the transforms based on the sourcetype:

# props.conf
[juniper]

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
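To spell out the sourcetype-keyed variant, here is a sketch that reuses the transforms from the question's config unchanged; the [juniper] stanza name matches the sourcetype set in that inputs.conf, and the regexes are assumed to match the raw events as they arrive:

# props.conf (on the heavy forwarder)
[juniper]
TRANSFORMS-null = TenantToTrust,TrustToTenant

# transforms.conf
[TenantToTrust]
REGEX = source-zone-name="tenant".*destination-zone-name="trust"
DEST_KEY = queue
FORMAT = nullQueue

[TrustToTenant]
REGEX = source-zone-name="trust".*destination-zone-name="tenant"
DEST_KEY = queue
FORMAT = nullQueue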
Hi @kennsche,
in [Settings > User interface > Time ranges] you can define the time ranges that a role sees in the default choices, but this does not prevent choosing a larger time period. So the most efficient way to really limit the time period in searches is to create a role dedicated to your users and then add a limit in [Settings > Roles > Click on role > Resources > Role search time window limit].
Ciao.
Giuseppe
Hi all, I'm having some issues excluding events from our Juniper SRX logs. These events are ingested directly on our Windows Splunk Heavy Forwarders, since these two firewalls are the only syslog inputs we have. My current config is as follows:

inputs.conf
[udp://firewallip:port]
connection_host = ip
disabled = false
index = juniper
sourcetype = juniper

props.conf
[udp://firewallip:port]
TRANSFORMS-null=TenantToTrust,TrustToTenant
force_local_processing = true

transforms.conf
[TenantToTrust]
REGEX = source-zone-name="tenant".*destination-zone-name="trust"
DEST_KEY = queue
FORMAT = nullQueue

[TrustToTenant]
REGEX = source-zone-name="trust".*destination-zone-name="tenant"
DEST_KEY = queue
FORMAT = nullQueue

All we'd like to do is exclude any events where the source and destination zones are both tenant or trust. Any idea where I might be going wrong? Thanks.
Hello everyone, I am using Splunk Dashboard Studio to create a dashboard with two tabs, on Enterprise version 9.4.1. Both tabs are visually identical, but in tab 1 I am querying summary indexes, whereas in the second tab I am running normal queries. 'Normal' queries in this tab can be very intensive if a long time range is selected; therefore, I am trying to limit the time selection to a maximum range of two hours. It could be on any day, but the duration between start and end time should not exceed 2 hours (not the latest 2 hours). I've tried editing the XML by following some AI suggestions. Most suggestions relied on changing the query itself, but this was breaking the query and returning no results in the end. Does someone have any insights on how to do this, or could you guide me in the right direction? Visually it would look like this:
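One pattern that may be worth trying, since it leaves the business logic of the query alone: guard the search with addinfo, which exposes the selected range as info_min_time/info_max_time, and drop everything when the span exceeds two hours (7200 seconds). A hedged sketch only - note that this suppresses the results but does not stop the underlying scan of the selected range, so for real cost control it is best combined with the role-based search time window limit mentioned elsewhere in this thread:

index=your_index your_search_here
| addinfo
| where (info_max_time - info_min_time) <= 7200
| fields - info_min_time info_max_time info_sid info_search_time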
This looks very promising. Thank you for your valued input!
As usual - "it depends". During normal indexing a single pipeline engages 4-6 CPUs. So if you have a host which does nothing but ingestion processing (a HF), you can relatively harmlessly raise your number of pipelines. But on an indexer you have to remember two things:
1) You're still limited by the fact that you have to write all of that to disk at the end of the pipeline (so the performance improvement will be significantly less than linear).
2) Typically indexers mostly do searching after all. So tying CPUs to ingest processing leaves you with far fewer resources left for searching. That might lead to problems with long-running/delayed/skipped searches.
So on a modern, reasonably sized box with a typical use case, 1 or 2 parallel ingestion pipelines are indeed the optimal setting. With a slightly atypical architecture (for example a separate HF layer which does the heavy lifting, so the indexers only receive the parsed data and write it to disk), you could consider raising the parameter further.
Hi @nopera
The docs state "You ONLY need to install these add-ons on FORWARDERS." - the emphasis on ONLY is their wording, not mine! However, after investigating the contents of the app, it's clear there are field extractions which need to be on your search head and time/event parsing that needs to be on your indexers (since you are using a Universal Forwarder).
Please install the app on your search heads and indexers using your usual app deployment approach, and this should provide the relevant field extraction / CIM compliance.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
@nopera I recommend installing the add-on on both the indexers and the search heads.
Indexers are responsible for index-time operations such as parsing, data transformation, and routing. Therefore, any add-on containing props.conf or transforms.conf should be deployed to the indexers.
Search heads handle search-time functions, including dashboards, lookups, macros, and CIM mappings. It's safe to install the add-on on the search heads for search-time functionality; doing so won't interfere with index-time processes, provided those configurations are also present on the indexers.
In general, it's best practice to install the add-on across all relevant tiers (indexers, search heads, and forwarders) and enable only the necessary components on each, depending on the role of the system.
https://docs.splunk.com/Documentation/AddOns/released/Overview/Wheretoinstall
@nopera If you are using indexers (or a standalone Splunk Enterprise instance), follow these steps:
1. Deploy the TA-Exchange-Mailbox add-on to the indexer at the following path: /opt/splunk/etc/apps/TA-Exchange-Mailbox
2. Restart the Splunk service on the indexer to apply the changes.
3. On the Universal Forwarder, verify that inputs.conf is correctly configured with the appropriate sourcetype for the message tracking logs.
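By way of illustration for step 3, here is the shape such a monitor stanza could take; the path is the typical Exchange 2013+ default and the sourcetype and index are placeholders, not the add-on's documented values, so check both against the inputs.conf shipped with TA-Exchange-Mailbox:

# inputs.conf on the Universal Forwarder (illustrative values only)
[monitor://C:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\Logs\MessageTracking]
sourcetype = <sourcetype expected by TA-Exchange-Mailbox for message tracking>
index = <your_exchange_index>
disabled = 0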
@kiran_panchavat I don't use a heavy forwarder; I installed a universal forwarder on the Exchange server and placed the add-on "TA-Exchange-Mailbox" (the server is in the mailbox role) in the path "C:\Program Files\SplunkUniversalForwarder\etc\apps". Now I am getting the logs, but the message tracking logs aren't parsed correctly. What should I do now? Example logs below are from the test env.
Did you get an answer to this? Can you help with the resolution you obtained?