All Posts

Hello, thanks for the reply. The correlation search is:

index=pg_idx_windows_data source=XmlWinEventLog:System sourcetype=XmlWinEventLog Name='Microsoft-Windows-Kernel-Boot'
| join host [ search index=pg_idx_windows_data source=operatingsystem sourcetype=WinHostMon]
| eval Server = upper(host)
| join Server [ inputlookup pg_ld_production_servers | rename Site AS Plant | fields Plant Server SNOW_Location_Name Disable_Alert facility_name site_id wave snow_business_service snow_service_offering SNOW_assignment_group]
| search Disable_Alert = 0
| fields - Disable_Alert
| dedup host
| eval Reboot_Time_EST = strftime(_time, "%Y-%m-%d %I:%M:%S:%p")
| eval Reboot_Site_Time = substr(LastBootUpTime,1,4) + "-" + substr(LastBootUpTime,5,2) + "-" + substr(LastBootUpTime,7,2) + " " + substr(LastBootUpTime,9,2) + ":" + substr(LastBootUpTime,11,2) + ":" + substr(LastBootUpTime,13,2)
| table Plant Server Type Reboot_Time_EST Reboot_Site_Time
| sort by Site Server
| eval itsiSeverity = 5
| eval itsiStatus = 2
| eval itsiTower = "MFG"
| eval itsiAlert = "Proficy Server Reboot Alert in last 15 minutes"
| rename SNOW_Location_Name AS Location
| eval n=now()
| eval url_start_time = n - (1 * 24 * 3600)
| eval url_end_time = n + (1 * 24 * 3600)
| eval episode_url1 = "https://itsi-pg-mfg-splunk-prod.splunkcloud.com/en-US/app/itsi/itsi_event_management?earliest=".url_start_time."&latest=".url_end_time."&dedup=true&filter="
| eval episode_url1=episode_url1."%5B%7B%22label%22%3A%22Episode%20Id%22%2C%22id%22%3A%22itsi_group_id%22%2C%22value%22%3A"
| eval episode_url2="%2C%22text%22%3A"
| eval episode_url3="%7D%5D"
| fields - n url_start_time url_end_time
``` ELK fields```
| eval alert_name = "PG-GLOBAL-Proficy-Server-Reboot-ALERT"
| eval facility_type = "Site"
| eval facility_area = "Manufacturing"
| eval snow_location = Location
| eval application_name = "Proficy Plant Applications"
| eval application_id = "CI000008099"
| eval name_space = "Manufacturing"
| eval snow_configuration_item = "Proficy Plant Applications"
| eval snow_incident_type = "Design: Capacity Overutilization"
| eval snow_category = "Business Application & Databases"
| eval snow_subcategory = "Monitoring"
| eval snow_is_cbp_impacted = "Yes"
| eval alert_severity = "High"
| eval alert_urgency = "High"
| eval snow_severity = "1"
| eval snow_urgency = "2"
| eval snow_impact = "2"
| eval primary_property = "hostname"
| eval secondary_property = "alert_name"
| eval source_system = "splunk"
| eval stage = "Prod"
| eval snow_contact_type = "Auto Ticket"
| eval hostname = Server
| eval app_component = ""
| eval app_component_ID = ""
| eval status = "firing"
| eval correlation_rule = "application_id, site_id, facility_name, hostname, infrastructure_type"
| eval actionability_type = "incident"
| eval alert_actionable = "true"
| eval uc_environment = "sandbox"
Hi folks, I am trying to create a trellis view for a pie chart in a dashboard but am unable to, and I end up with the error below. Could someone help with this? Is it possible to create a trellis layout using a pie chart in a dashboard?
Hi @boknows, please try this:

[host_override]
DEST_KEY = MetaData:Host
REGEX = ^\s*([^\s]+)
FORMAT = host::$1

to manage the data sources with the space at the beginning of the events. And, as suggested by @PickleRick, change the name of the transformation. Ciao. Giuseppe
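For completeness, a minimal sketch of the props.conf side (hedged, since the actual sourcetype name is not shown in this thread; the stanza name below is hypothetical). A transform only takes effect once it is referenced from props.conf on the parsing tier:

[your_sourcetype]
TRANSFORMS-hostoverride = host_override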
@Space_Crawler Review the following attributes in props.conf for the configured sourcetypes:

TIME_PREFIX
TIME_FORMAT
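For reference, a minimal sketch of what such a stanza could look like, assuming (hypothetically) that the events carry an ISO-8601 timestamp in a "datetime" field; the sourcetype name, regex, and format below are illustrative assumptions, not the app's actual configuration:

[your_meraki_sourcetype]
TIME_PREFIX = "datetime":\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 32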
@Space_Crawler Ensure that the datetime field in your data is correctly formatted and matches the expected format in Splunk. Verify that the "cisco_meraki_appliance_vpn_statuses" input is correctly configured to identify the datetime field. Review the configuration files (e.g., props.conf and transforms.conf) to ensure that the datetime field is properly defined.
Hi, did you find a fix besides reassigning all the savedsearches without an owner?
@Namdev Did you complete the following steps?

1. Copy the app to the $SPLUNK_HOME/etc/manager-apps directory on the cluster master node.
2. Push the app from the cluster master to the peer nodes by running the command: /opt/splunk/bin/splunk apply cluster-bundle
   This updates the cluster configurations on the peer nodes.
3. Verify on the indexers that the app is present in the /opt/splunk/etc/peer-apps directory.

If the app is not visible, refer to the official documentation for detailed instructions on how to push the app from the cluster master: https://docs.splunk.com/Documentation/Splunk/9.4.0/Indexer/Manageappdeployment
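As a quick check (a hedged suggestion, run on the cluster manager), you can confirm whether the peers picked up the new bundle with:

/opt/splunk/bin/splunk show cluster-bundle-status

This reports the active bundle checksum on the manager and on each peer, so a mismatch points to a push that did not complete.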
Yes, I tried using the app option and also checked with the _cluster option, where I placed the props.conf and transforms.conf files and distributed them among the peers.
@ww9rivers  The warning message "Pipeline data does not have indexKey" typically indicates that the data being sent to the indexer is missing the necessary index information.  Make sure that the inputs.conf file on your forwarder or heavy forwarder is configured with the correct index. I recommend creating and using a dedicated index instead of the main index, as main is the default index and it's better to keep your data organized.  
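For example, a hypothetical monitor stanza (the path, index name, and sourcetype are placeholders, not taken from this thread):

[monitor:///opt/log/myapp.log]
index = my_app_index
sourcetype = custom_logs

With index set explicitly, each event carries an index key through the pipeline instead of relying on defaults.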
@Namdev I suggest starting with a standalone test instance. Create your props.conf and transforms.conf files in either the /opt/splunk/etc/system/local or app/local directory, then restart the Splunk instance. After that, open the web interface of the same instance, navigate to the "Add Data" option, and upload your sample log file. Apply your custom sourcetype, "custom_logs", and verify that it works as expected. If everything looks good, proceed to update the same configuration in the cluster using the cluster master.
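Once the files are in place, one way (a hedged suggestion) to confirm that Splunk actually picked up the stanzas is btool:

/opt/splunk/bin/splunk btool props list custom_logs --debug
/opt/splunk/bin/splunk btool transforms list extract_fields --debug

The --debug flag shows which file each setting is read from, which helps catch precedence issues between system/local and app/local.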
So I removed this stanza from the default.meta file:

[savedsearches]
owner = admin

and it started working. How?
@Namdev Did you deploy the props.conf and transforms.conf files through the cluster manager? You need to create an app on the cluster manager under /opt/splunk/etc/master-apps/ or /opt/splunk/etc/manager-apps/. Once the app is deployed, it should be propagated to the indexers, appearing under /opt/splunk/etc/peer-apps/ or /opt/splunk/etc/slave-apps/. Please verify that you have correctly created and deployed the app containing the props.conf and transforms.conf configurations. Update common peer configurations and apps - Splunk Documentation
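For illustration, a hypothetical layout on the cluster manager (the app name is a placeholder):

$SPLUNK_HOME/etc/manager-apps/my_parsing_app/local/props.conf
$SPLUNK_HOME/etc/manager-apps/my_parsing_app/local/transforms.conf

After running splunk apply cluster-bundle, the same app should appear under $SPLUNK_HOME/etc/peer-apps/ on each indexer.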
I am writing a simple TA to read a text file and turn it into a list of JSON events. I am getting a WARN message for each event from the TcpOutputProc process, such as the one below:

02-21-2025 01:06:04.001 -0500 WARN TcpOutputProc [2061704 indexerPipe] - Pipeline data does not have indexKey.

I removed the rest of the message containing details. It seems that I am missing something simple. I would greatly appreciate some insights/pointers towards debugging this issue. The TA code is here on GitHub: https://github.com/ww9rivers/TA-json-modinput Many thanks in advance!
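For anyone hitting the same warning later: if the TA is built on the Splunk Python SDK's modular input framework, one hedged guess (not verified against the linked repo) is that events are being written without an index attached. A minimal sketch of passing the index from the input stanza through to each event:

from splunklib.modularinput import Event

# Hypothetical helper called from Script.stream_events(); the names and
# the fallback sourcetype are illustrative assumptions, not code from the TA.
def write_json_event(ew, input_name, input_item, payload):
    event = Event(
        data=payload,                            # JSON string to index
        stanza=input_name,                       # inputs.conf stanza name
        index=input_item.get("index"),           # index set in inputs.conf
        sourcetype=input_item.get("sourcetype", "custom:json"),
    )
    ew.write_event(event)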
Hello Team, I have a parsing issue.

I have built a distributed Splunk lab using a trial license. The lab consists of three indexers, one cluster manager, one search head, one instance serving as the Monitoring Console (MC), Deployment Server (DS), and License Manager (LM), along with two Universal Forwarders. The forwarder is monitoring the /opt/log/routerlog directory, where I have placed two log files: cisco_ironport_web.log and cisco_ironport_mail.log. The logs are successfully forwarded to the indexers and then to the search head. However, log parsing is not happening as expected. I have applied the same configuration of props.conf and transforms.conf on both the indexer cluster and the search head.

props.conf and transforms.conf file paths:
Indexer path: /opt/splunk/etc/peer-apps/_cluster/local
Search head path: /opt/splunk/etc/apps/search/local

transforms.conf:

[extract_fields]
REGEX = ^(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\s+(?P<src_ip>\d+\.\d+\.\d+\.\d+)\s+(?P<email>\S+@\S+)\s+(?P<domain>\S+)\s+(?P<url>\S+)
FORMAT = timestamp::$1 src_ip::$2 email::$3 domain::$4 url::$5

props.conf:

[custom_logs]
SHOULD_LINEMERGE = false
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
TRANSFORMS-extract_fields = extract_fields
Hi, I am seeing the same error message. Has anyone been able to resolve this?
Hello, I have a fresh install of Splunk and the Meraki TA app. I have configured several inputs in the app; however, I am seeing a large number of these error messages under various inputs (for example, appliance_vpn_statuses, appliance_vpn_stats):

2025-02-24 03:12:56,971 WARNING pid=50094 tid=MainThread file=cisco_meraki_connect.py:col_eve:597 | Could not identify datetime field for input: cisco_meraki_appliance_vpn_statuses
I have 3 sources that I need to do this for, and I was able to get 2 of them to come through by putting the props in the TA that normalizes the data. The only difference among the 3 data sources is that the one I can't get to work has a space in the logs before it breaks. The regex I am using is the same one I used for the other two data sources, just with a space added before it. It is not working, though.
I got nothing wrong. Step 2 is not possible. Yes, you can change the name of the index, but an event cannot be written to a metric index without conversion. The fact that step 1 works perfectly tells me the data is an event rather than a metric. Splunk has a tendency to overload terms. In this case, "metric" can refer to a numeric value in an event, or it can refer to a specific format of data (also numeric) that only a metric index can store. It's the format (or lack of it) that's causing the error message.
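To illustrate the conversion in question (a hedged SPL sketch; the index and field names are hypothetical), events carrying a numeric field can be rewritten as metric data points with mcollect, which expects a metric_name and a numeric _value and writes to a metrics index:

index=my_events sourcetype=app_stats
| eval metric_name="app.response_time"
| eval _value=response_time_ms
| mcollect index=my_metrics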
color isn't listed in the final table command of your search, so it doesn't appear in the final result set. If you want a field that isn't displayed in your table, start the field name with an underscore, e.g. _color, and reference that in the done handler. Try something like this:

<table>
  <title>TABLESPACE_FREESPACE</title>
  <search>
    <query>
      index="database" source="tables"
      | eval BYTES_FREE = replace(BYTES_FREE, ",", "")
      | eval BYTES_USED = replace(BYTES_USED, ",", "")
      | eval GB_USED = BYTES_USED / (1024 * 1024 * 1024)
      | eval GB_FREE = BYTES_FREE / (1024 * 1024 * 1024)
      | eval GB_USED = floor(GB_USED * 100) / 100
      | eval GB_FREE = floor(GB_FREE * 100) / 100
      | eval CALCULATED_PERCENT_FREE = (GB_FREE / (GB_USED + GB_FREE)) * 100
      | eval CALCULATED_PERCENT_FREE = floor(CALCULATED_PERCENT_FREE * 10) / 10
      | eval _color = if(CALCULATED_PERCENT_FREE >= PERCENT_FREE, "#00FF00", "#FF0000")
      | rename TABLESPACE_NAME as "Tablespace", GB_USED as "Used Space (Gb)", GB_FREE as "Free Space (Gb)", PERCENT_FREE as "Free Space (%)"
      | table "Tablespace" "Used Space (Gb)" "Free Space (Gb)" "Free Space (%)" _color
    </query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
    <done>
      <set token="color">$result._color$</set>
    </done>
  </search>
  <option name="count">21</option>
  <option name="drilldown">none</option>
  <option name="wrap">false</option>
  <format type="color" field="Free Space (%)">
    <colorPalette type="expression">$color|s$</colorPalette>
  </format>
</table>
Hi @tscroggins, sorry for the late response. I have the following version: 5.5.0. I also tried a private incognito browser session and got the same problem: I cannot even choose an app when trying to publish a model, so I really don't know how to solve it. I can only open the model in search and then try to apply it to new data, but I don't know if that is the same thing.