All Posts

Try the steps below. Choose a single format and start the upload process: if you choose PEM, follow the steps in Scenario 1; if you have the PKCS#7 format, go directly to Scenario 2.

Scenario 1
Step 1: Import the root and intermediate certificates (CA bundle) with the command below:
keytool -import -trustcacerts -alias ca -file file.ca-bundle -keystore mykeystore.jks
Note: This alias name and the keystore alias should not be the same.
Step 2: After importing the CA bundle, import the SSL certificate itself with the command below:
keytool -import -trustcacerts -alias myalias -file file.crt -keystore mykeystore.jks
Note: This alias and the keystore alias should be the same.

Scenario 2
Step 1: Use the command below to upload everything in one go:
keytool -import -trustcacerts -alias myalias -file file.p7b -keystore mykeystore.jks
The alias attribute must match the alias set for your keystore. You will be prompted for the keystore password; make sure that myalias matches the alias set for your keystore. (If in doubt, run "keytool -list -v -keystore mykeystore.jks" to see the alias name.)

Check https://cheapsslweb.com/resources/how-to-install-an-ssl-certificate-on-glassfish if you are still facing issues.
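Not covered in the answer above: once the keystore contains the chain and certificate, GlassFish still has to be told which alias to serve. A hedged sketch, assuming the certificate was imported into the domain's own keystore (domains/domain1/config/keystore.jks), the default HTTPS listener http-listener-2, and a default domain named domain1; none of these names come from the post, so adjust them to your setup:
# point the HTTPS listener at the newly imported alias
asadmin set configs.config.server-config.network-config.protocols.protocol.http-listener-2.ssl.cert-nickname=myalias
# restart so the listener picks up the new certificate
asadmin restart-domain domain1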
@sureshkumaar  When the thruput limit is reached, monitoring pauses and events like the following are recorded in splunkd.log:
INFO TailingProcessor - Could not send data to output queue (parsingQueue), retrying...
You can also review the effective input configuration with:
/opt/splunkforwarder/bin/splunk btool inputs list --debug
To verify how often the forwarder is hitting this limit, check the forwarder's metrics.log. (Look for this on the forwarder itself, because metrics.log is not forwarded by default on universal and light forwarders.)
cd /opt/splunkforwarder/var/log/splunk
grep "name=thruput" metrics.log
Example: instantaneous_kbps and average_kbps are always under 256 KBps.
11-19-2013 07:36:01.398 -0600 INFO Metrics - group=thruput, name=thruput, instantaneous_kbps=251.790673, instantaneous_eps=3.934229, average_kbps=110.691774, total_k_processed=101429722, kb=7808.000000, ev=122
Solution: create a custom limits.conf with a higher limit or no limit. The configuration can go in system/local or in an app that takes precedence over the existing limit. Example: configure it in a dedicated app, in /opt/splunkforwarder/etc/apps/Gofaster/local/limits.conf.
To double the thruput limit from 256 to 512 KBps:
[thruput]
maxKBps = 512
Or for unlimited thruput:
[thruput]
maxKBps = 0
Unlimited speed can cause higher resource usage on the forwarder, so keep a limit if you need to control monitoring and network usage. Restart to apply, verify the resulting configuration with btool, and later check metrics.log to confirm the forwarder is not constantly hitting the new limit.
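The answer above says to verify the result with btool; a minimal sketch of that check on the forwarder, assuming the default universal forwarder install path:
/opt/splunkforwarder/bin/splunk btool limits list thruput --debug
The --debug flag prints the file each setting comes from, so you can confirm that the maxKBps value from the Gofaster app wins over the shipped default.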
I have both Chinese and English field names from the Windows event log, and I would like to use field aliases so that searching on the English field names matches both the Chinese and English fields. However, I found that some fields, such as src and src_port, are missing for the Chinese-named fields. This issue might be related to the fact that field aliases take effect only after EXTRACT, REPORT, or LOOKUP. Does anyone have suggestions on how to effectively convert between Chinese and English field names? (I can only make changes on the SH; I am not allowed to change the other instance.)
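One search-head-only approach, sketched here under assumptions: use calculated fields (EVAL-) in props.conf to coalesce the Chinese and English names into one field, since calculated fields are evaluated after automatic extractions and aliases. The sourcetype and the Chinese field names below are placeholders, not the names actually extracted from your events:
[XmlWinEventLog]
# hypothetical Chinese field names - replace with the real extracted names
EVAL-src = coalesce(src, '來源網路位址')
EVAL-src_port = coalesce(src_port, '來源連接埠')
Note that field names containing non-ASCII characters must be wrapped in single quotes inside eval expressions.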
I have configured an app and added 7 different source files in a single inputs.conf with the same index name and sourcetype name. The parent directory name is the same for all 7 source files; only the sub-directory name changes. The log files have a *.log extension, but only one log file is sending events to Splunk. A sample inputs.conf stanza is provided below.
[monitor:///ABC-DEF50/Platform/*.log]
disabled = false
index = os_linux
sourcetype = nix:messages
crcSalt = <SOURCE>
Last week's data for the remaining 6 source files appeared in Splunk only after 2-3 days; I checked and could see that indexing is being delayed. How do I fix this? Kindly help.
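A hedged alternative stanza, based only on the description above (the path and filter are assumptions): monitor the parent directory recursively instead of matching only files directly under Platform, and restrict it to *.log files with a whitelist.
[monitor:///ABC-DEF50/Platform]
whitelist = \.log$
disabled = false
index = os_linux
sourcetype = nix:messages
A plain directory monitor recurses into sub-directories by default. Also double-check whether crcSalt = <SOURCE> is really needed, since it makes rotated or renamed files look like brand-new files.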
Please expand on what you mean by each scenario: when are you getting results and when are you not? If there are multiple commands in your SPL, try removing them one at a time until the behaviour changes, so you can identify the step that is causing the issue.
Hi, I have this search:
| `es_notable_events`
| search timeDiff_type=current
| timechart minspan=30m sum(count) as count by security_domain
When I search for the macro `es_notable_events` under all apps and any owner, it does not show any results, but the search itself gives results, and when I run the macro it also shows results.
I managed to fix this via the Splunk app on Splunk SOAR: I ran a custom SPL query that uses "outputlookup" to update an existing lookup file.
This is one of the many things that Classic supports and Studio doesn't. If you can, use Classic dashboards until Studio is brought up to the same level.
The issue is: we are creating a maintenance window on the basis of entities (servers), but we are still receiving incidents (e.g. Server Reboot, Server Stopped) during the activity window. We do not want the action rules defined in the NEAP for correlation searches, such as SNOW incidents or email notifications, to fire for the entities added to the maintenance window during its defined time frame. Can you please help us with this issue? Attaching snapshots for your reference. Regards,
Hello, thanks for the reply. The correlation search is:
index=pg_idx_windows_data source=XmlWinEventLog:System sourcetype=XmlWinEventLog Name='Microsoft-Windows-Kernel-Boot'
| join host [ search index=pg_idx_windows_data source=operatingsystem sourcetype=WinHostMon]
| eval Server = upper(host)
| join Server [ inputlookup pg_ld_production_servers | rename Site AS Plant | fields Plant Server SNOW_Location_Name Disable_Alert facility_name site_id wave snow_business_service snow_service_offering SNOW_assignment_group]
| search Disable_Alert = 0
| fields - Disable_Alert
| dedup host
| eval Reboot_Time_EST = strftime(_time, "%Y-%m-%d %I:%M:%S:%p")
| eval Reboot_Site_Time = substr(LastBootUpTime,1,4) + "-" + substr(LastBootUpTime,5,2) + "-" + substr(LastBootUpTime,7,2) + " " + substr(LastBootUpTime,9,2) + ":" + substr(LastBootUpTime,11,2) + ":" + substr(LastBootUpTime,13,2)
| table Plant Server Type Reboot_Time_EST Reboot_Site_Time
| sort by Site Server
| eval itsiSeverity = 5
| eval itsiStatus = 2
| eval itsiTower = "MFG"
| eval itsiAlert = "Proficy Server Reboot Alert in last 15 minutes"
| rename SNOW_Location_Name AS Location
| eval n=now()
| eval url_start_time = n - (1 * 24 * 3600)
| eval url_end_time = n + (1 * 24 * 3600)
| eval episode_url1 = "https://itsi-pg-mfg-splunk-prod.splunkcloud.com/en-US/app/itsi/itsi_event_management?earliest=".url_start_time."&latest=".url_end_time."&dedup=true&filter="
| eval episode_url1=episode_url1."%5B%7B%22label%22%3A%22Episode%20Id%22%2C%22id%22%3A%22itsi_group_id%22%2C%22value%22%3A"
| eval episode_url2="%2C%22text%22%3A"
| eval episode_url3="%7D%5D"
| fields - n url_start_time url_end_time
``` ELK fields```
| eval alert_name = "PG-GLOBAL-Proficy-Server-Reboot-ALERT"
| eval facility_type = "Site"
| eval facility_area = "Manufacturing"
| eval snow_location = Location
| eval application_name = "Proficy Plant Applications"
| eval application_id = "CI000008099"
| eval name_space = "Manufacturing"
| eval snow_configuration_item = "Proficy Plant Applications"
| eval snow_incident_type = "Design: Capacity Overutilization"
| eval snow_category = "Business Application & Databases"
| eval snow_subcategory = "Monitoring"
| eval snow_is_cbp_impacted = "Yes"
| eval alert_severity = "High"
| eval alert_urgency = "High"
| eval snow_severity = "1"
| eval snow_urgency = "2"
| eval snow_impact = "2"
| eval primary_property = "hostname"
| eval secondary_property = "alert_name"
| eval source_system = "splunk"
| eval stage = "Prod"
| eval snow_contact_type = "Auto Ticket"
| eval hostname = Server
| eval app_component = ""
| eval app_component_ID = ""
| eval status = "firing"
| eval correlation_rule = "application_id, site_id, facility_name, hostname, infrastructure_type"
| eval actionability_type = "incident"
| eval alert_actionable = "true"
| eval uc_environment = "sandbox"
Hi folks, I am looking to create a trellis view for a pie chart in a dashboard, but I am unable to create it and end up with the error below. Could someone help with this? Is it possible to create a trellis using a pie chart in a dashboard?
Hi @boknows, please try this to manage the data sources that have a space at the beginning of the events:
[host_override]
DEST_KEY = MetaData:Host
REGEX = ^\s*([^\s]+)
FORMAT = host::$1
And, as suggested by @PickleRick, change the name of the transform. Ciao. Giuseppe
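For the transform to fire, it also has to be referenced from a props.conf stanza on the first full instance that parses the data (heavy forwarder or indexer). A minimal sketch, where the sourcetype name is an assumption:
[your_sourcetype]
TRANSFORMS-set_host = host_override
If you rename the transform as suggested, update the name on both sides.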
@Space_Crawler  Review the following attributes in props.conf for the configured sourcetypes:
TIME_PREFIX
TIME_FORMAT
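A minimal sketch of what that props.conf stanza could look like; the sourcetype name, the label before the timestamp, and the timestamp layout are all assumptions to adapt to the actual events:
[your_meraki_sourcetype]
# hypothetical sourcetype and timestamp layout
TIME_PREFIX = "datetime":\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6NZ
MAX_TIMESTAMP_LOOKAHEAD = 40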
@Space_Crawler  Ensure that the datetime field in your data is correctly formatted and matches the format Splunk expects. Verify that the "cisco_meraki_appliance_vpn_statuses" input is correctly configured to identify the datetime field. Review the configuration files (e.g., props.conf and transforms.conf) to ensure that the datetime field is properly defined.
Hi, did you find a fix besides reassigning all the savedsearches without an owner?
@Namdev  Did you complete the following steps?
Step 1: Copy the app to the $SPLUNK_HOME/etc/manager-apps directory on the cluster master node.
Step 2: Push the app from the cluster master to the peer nodes by running:
/opt/splunk/bin/splunk apply cluster-bundle
This updates the cluster configuration bundle on the peer nodes.
Step 3: Verify on the indexers that the app is present in the /opt/splunk/etc/peer-apps directory.
If the app is not visible, refer to the official documentation for detailed instructions on how to push the app from the cluster master: https://docs.splunk.com/Documentation/Splunk/9.4.0/Indexer/Manageappdeployment
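To confirm the push completed on every peer, a quick check on the cluster master (assuming the default install path):
/opt/splunk/bin/splunk show cluster-bundle-status
This shows the active bundle checksum on the master and on each peer, so you can see whether the new bundle was actually applied everywhere.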
Yes, I tried using the app option and also checked with the _cluster option, where I placed the props.conf and transforms.conf files and distributed them among the peers.
@ww9rivers  The warning message "Pipeline data does not have indexKey" typically indicates that the data being sent to the indexer is missing the necessary index information.  Make sure that the inputs.conf file on your forwarder or heavy forwarder is configured with the correct index. I recommend creating and using a dedicated index instead of the main index, as main is the default index and it's better to keep your data organized.  
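A minimal inputs.conf sketch with an explicit index; the path, index, and sourcetype names are placeholders, not values from the original question:
[monitor:///var/log/myapp/app.log]
index = myapp_idx
sourcetype = myapp:log
disabled = false
The index named here must already exist on the indexers (indexes.conf); otherwise events for it are dropped or, if configured, routed to the lastChanceIndex.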
@Namdev I suggest starting with a standalone test instance. Create your props.conf and transforms.conf files in either the /opt/splunk/etc/system/local or app/local directory, then restart the Splunk instance. After that, open the web interface of the same instance, navigate to the "Add Data" option, and upload your sample log file. Apply your custom sourcetype, "custom_logs," and verify if it's working as expected. If everything looks good, proceed to update the same configuration in the cluster using the cluster master.
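To confirm which props.conf settings are actually in effect on the test instance before promoting them to the cluster, a quick check (assuming the default install path and the sourcetype name above):
/opt/splunk/bin/splunk btool props list custom_logs --debug
The --debug flag prints the file each setting comes from, so you can see whether system/local or your app's local directory is taking precedence.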
So I removed this stanza from the default.meta file:
[savedsearches]
owner = admin
and it started working. How?