All Posts


Hi @Karthikeya, you can override the index name before indexing, not after. Once data is indexed, you cannot move it from one index to another. The only way is to re-index all the events, paying the license twice. But why do you want to change the index? An index isn't a database table: you can have different and heterogeneous data in the same index. Remember that an index is usually defined based on two parameters: retention and access grants. Ciao. Giuseppe
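If re-ingesting the original files is not an option, one workaround sometimes used is copying the events into the new index with the collect command. A minimal sketch, assuming the index names from the question below; note that collect writes with sourcetype=stash by default, and forcing the original sourcetype instead makes the copied data count against the license a second time:

index=sony_app_JUPITER_prod earliest=0
| collect index=sony_app_235678_prod

The copies keep their timestamps but not their original sourcetype, so treat this as a last resort rather than a true rename.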
Hi @mrkhan48, sorry but I don't understand: if the search you configured for the dropdown has no results, how can it have results later? Anyway, to avoid kicking off the searches on first load, do not set a default or initial value for the dropdown. Ciao. Giuseppe
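A minimal Simple XML sketch of that idea (token and searches are illustrative): the dropdown has no <default> or <initialValue>, and the panel declares depends on the token, so until the user picks a value the panel stays hidden and its search never runs:

<form>
  <fieldset>
    <input type="dropdown" token="my_value">
      <label>Please select a value in the dropdown</label>
      <search>
        <query>index=_internal | stats count by sourcetype</query>
      </search>
      <fieldForLabel>sourcetype</fieldForLabel>
      <fieldForValue>sourcetype</fieldForValue>
    </input>
  </fieldset>
  <row>
    <panel depends="$my_value$">
      <table>
        <search>
          <query>index=_internal sourcetype=$my_value$ | head 10</query>
        </search>
      </table>
    </panel>
  </row>
</form>

Note that the dropdown's own populating search still runs on load; only the dependent panels wait for a selection.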
Every command is giving results; that's not the problem. The problem is the macro used in this search: when I search for this macro, it gives me no result.
Working on a use case which entails finding all containers/artifacts that match certain field conditions. The idea is to run an API query against the SOAR artifact endpoint to get all the artifacts and use the returned artifact fields in further automation. A few questions in this respect:

1) Does SOAR support API filtering as described in this article: https://medium.com/@lovely_peel_hamster_92/splunk-phantom-rest-api-filters-956a58854bfc — specifically the ability to access child objects in JSON? The documentation does not seem to mention anything about accessing child objects: https://docs.splunk.com/Documentation/Phantom/4.10.7/PlatformAPI/RESTQueryData

2) Also, when filters are applied, we seem to lose the ability to restrict the output to a list of fields: it returns the entire JSON, while the requirement is for specific fields.

What we are actually trying to achieve: check for closed SNow INCs and close the corresponding Splunk ES notables and SOAR containers. We have broken the approach down into modules and have the component parts working, but the aforementioned filtering is tripping us up; solving it will let us complete the playbook. I also found this, and we are attempting something very similar: https://community.splunk.com/t5/Splunk-SOAR/Playbook-run-on-bulk-events/m-p/667251. Again, the filtering is key to completing this. Also open to suggestions on the approach to achieve the above. Thanks in advance!
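For reference, the kind of call being discussed looks roughly like this (host, token, and field path are placeholders; the Django-style double-underscore syntax for reaching into child JSON such as cef comes from the linked article, and I can't confirm from the official docs how far it extends):

curl -k -H "ph-auth-token: <YOUR_TOKEN>" \
  "https://soar.example.com/rest/artifact?_filter_cef__sourceAddress=%221.2.3.4%22&page_size=0"

The %22 encodes the quotes that string values in _filter_ parameters require, and page_size=0 asks for all matching records.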
Currently, when I open the dashboard for the first time, the dropdown starts searching and after a few seconds returns 'No results found'. I do not want it to search on first load; rather, I want to show the message 'Please select a value in the dropdown'. Please assist. I am a beginner as far as Splunk knowledge is concerned.
How to rename an index? We already have an index created which is receiving data. Now we want to change that index name so that all old data is in this new index when users search with index=<new index>, and all new logs land in this new index too. indexes.conf:

[sony_app_JUPITER_prod]
homePath   = volume:primary/$_index_name/db
coldPath   = volume:primary/$_index_name/colddb
thawedPath = $SPLUNK_DB/$_index_name/thaweddb

We want our new index to be sony_app_235678_prod. I have gone through https://community.splunk.com/t5/Getting-Data-In/How-to-rename-an-index/m-p/28599, but we cannot stop Splunk, and I didn't understand the 3rd point. Can anyone please be more descriptive? Thanks.
Hello everyone, I am currently using ES version 7.1.0. Recently (I'm not sure exactly when) the maintenance team upgraded Splunk and ES. I was separating types of security incidents by manually creating and using Security Domains from security_domains.csv. As you know, there are 2 fields in the csv: security_domain and label. For example, I created a new security domain, filling both columns, named "EPP Endpoint"; whenever a notable was created, it came out as "Epp endpoint", which is not how I created it. I was expecting something like "EPP Endpoint - my rule name - Rule", but it wasn't working as I expected.

Somehow, without changing anything in the security domains, I created another notable with the domain "EPP Endpoint", and now my new notables are created as I expected, for example "EPP Endpoint - my new rule - Rule". I thought this might be related to the upgrade, but I checked the release notes and known issues and couldn't find any clue. In the correlationsearches.csv lookup I can still see the difference, yet my new notables are created as I expected.

I wonder why it works in 2 different ways. It affects my whole architecture, because I am fetching these notables into my SOAR and I have to define my correlations there to determine their types; if I don't define them correctly, the notable won't be classified the right way. What I mean is that "Epp endpoint - rulename - Rule" and "EPP Endpoint - rulename - Rule" differ. I hope someone can help me with this issue. Thanks in advance.
@Alan_Chan If field aliases are not sufficient or the fields are not being extracted properly, you can create Calculated Fields on the SH. Navigate to Settings > Fields > Calculated Fields and use a simple eval expression to create a field covering both the English and Chinese field names (note the single quotes, which reference the Chinese-named field rather than a string literal):

eval src = coalesce(src, '來源網路位址')
eval src_port = coalesce(src_port, '來源連接埠')

https://docs.splunk.com/Documentation/Splunk/9.4.0/Knowledge/CreatecalculatedfieldswithSplunkWeb
https://docs.splunk.com/Documentation/Splunk/9.4.0/Knowledge/definecalcfields
Try the below steps. Choose a single format and start the uploading process. If you choose PEM, follow the steps in Scenario 1; move directly to Scenario 2 if you have selected the PKCS#7 format.

Scenario 1
Step 1: Import the root and intermediate certificates (CA bundle) with the command below. Note: the alias name and the keystore alias name should not be the same.

keytool -import -trustcacerts -alias ca -file file.ca-bundle -keystore mykeystore.jks

Step 2: After importing the CA bundle, upload the SSL certificate itself. Note: here the alias must match the alias set for your keystore.

keytool -import -trustcacerts -alias myalias -file file.crt -keystore mykeystore.jks

Scenario 2
Step 1: Use the command below to upload everything in one go. You will be prompted for the keystore password; make sure the alias attribute (myalias) matches the alias set for your keystore. (If in doubt, run "keytool -list -v -keystore mykeystore.jks" to see the alias name.)

keytool -import -trustcacerts -alias myalias -file file.p7b -keystore mykeystore.jks

Check this one if you are still facing issues: https://cheapsslweb.com/resources/how-to-install-an-ssl-certificate-on-glassfish
@sureshkumaar When the thruput limit is reached, monitoring pauses and the following events are recorded in splunkd.log:

INFO TailingProcessor - Could not send data to output queue (parsingQueue), retrying...

To check the effective limit, run:

/opt/splunkforwarder/bin/splunk btool limits list thruput --debug

To verify how often the forwarder is hitting this limit, check the forwarder's metrics.log. (Look for this on the forwarder itself, because metrics.log is not forwarded by default on universal and light forwarders.)

cd /opt/splunkforwarder/var/log/splunk
grep "name=thruput" metrics.log

Example, where instantaneous_kbps and average_kbps stay under 256KBps:

11-19-2013 07:36:01.398 -0600 INFO Metrics - group=thruput, name=thruput, instantaneous_kbps=251.790673, instantaneous_eps=3.934229, average_kbps=110.691774, total_k_processed=101429722, kb=7808.000000, ev=122

Solution: create a custom limits.conf with a higher limit or no limit. The configuration can go in system/local or in an app that takes precedence over the existing limit. For example, in a dedicated app at /opt/splunkforwarder/etc/apps/Gofaster/local/limits.conf, double the thruput limit from 256 to 512 KBps:

[thruput]
maxKBps = 512

Or for unlimited thruput:

[thruput]
maxKBps = 0

Unlimited speed can cause higher resource usage on the forwarder, so keep a limit if you need to control monitoring and network usage. Restart to apply, verify the resulting configuration with btool, and later confirm in metrics.log that the forwarder is not constantly hitting the new limit.
I have both Chinese and English field names from the Windows event log, and I would like to use field aliases so that the English field names can search for both the Chinese and English fields. However, I found that some fields, such as src and src_port, are missing for the Chinese field. This issue might be related to the fact that field aliases take effect only after EXTRACT, REPORT, or LOOKUP. Does anyone have suggestions on how to effectively convert between Chinese and English field names? (The only thing I can do is on the SH; I am not allowed to change the other instance.)  
I have configured an app and added 7 different source files in a single inputs.conf with the same index name and sourcetype name. The parent directory name is the same for all 7 source files; only the sub directory name changes. The log files have a *.log extension, but I am only getting events from one log file into Splunk. A sample inputs.conf is provided below:

[monitor:///ABC-DEF50/Platform/*.log]
disabled = false
index = os_linux
sourcetype = nix:messages
crcSalt = <SOURCE>

Last week's data for the remaining 6 source files only appeared in Splunk after 2-3 days; I checked and could see a delay in indexing is happening. How do I fix this? Kindly help.
Please expand on what you mean for each scenario: when are you getting results, and when are you not? If there are multiple commands in your SPL, try removing them one at a time until the situation changes, so you can identify the step that is causing the issue.
Hi, I have this search:

| `es_notable_events`
| search timeDiff_type=current
| timechart minspan=30m sum(count) as count by security_domain

When I search for the macro `es_notable_events` (filtering for all apps and any owner), it shows no result; but the search itself gives results, and running the macro on its own also shows results.
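One way to check where the macro is actually defined and which app/owner it is scoped to — a sketch using the configuration REST endpoint (this assumes your role is allowed to run the rest command):

| rest /servicesNS/-/-/configs/conf-macros splunk_server=local
| search title="es_notable_events"
| table title eai:acl.app eai:acl.owner eai:acl.sharing definition

If the macro only appears here with app-level sharing in an unexpected app, that would explain why the Settings page shows nothing under your filters.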
I managed to fix this via the Splunk App on Splunk SOAR: I ran a custom SPL query that uses "outputlookup" to update an existing lookup file.
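For anyone landing here later, a sketch of that kind of query (lookup name and fields are illustrative); append=true adds rows to the file instead of overwriting it:

| makeresults
| eval user="jdoe", status="closed"
| fields - _time
| outputlookup append=true my_existing_lookup.csv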
One of the many things that Studio doesn't support but Classic does. If you can, use Classic dashboards until Studio is brought up to the same level.
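For the record, in a Classic (Simple XML) dashboard the trellis options look like this — a minimal sketch with an illustrative search and split field:

<dashboard>
  <row>
    <panel>
      <chart>
        <search>
          <query>index=_internal | stats count by sourcetype, host</query>
        </search>
        <option name="charting.chart">pie</option>
        <option name="trellis.enabled">1</option>
        <option name="trellis.splitBy">host</option>
      </chart>
    </panel>
  </row>
</dashboard>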
The issue is: we are creating a Maintenance Window on the basis of entities (servers), but we are still receiving incidents, e.g. Server Reboot, Server Stopped, during the activity window. For the entities added to the Maintenance Window, we do not want the action rules defined in NEAP for correlation searches (such as SNOW incidents or email notifications) to fire during its defined time frame. Can you please help us with this issue? Attaching snapshots for your reference. Regards,
Hello, thanks for the reply. The correlation search is:

index=pg_idx_windows_data source=XmlWinEventLog:System sourcetype=XmlWinEventLog Name='Microsoft-Windows-Kernel-Boot'
| join host [ search index=pg_idx_windows_data source=operatingsystem sourcetype=WinHostMon ]
| eval Server = upper(host)
| join Server [| inputlookup pg_ld_production_servers | rename Site AS Plant | fields Plant Server SNOW_Location_Name Disable_Alert facility_name site_id wave snow_business_service snow_service_offering SNOW_assignment_group ]
| search Disable_Alert = 0
| fields - Disable_Alert
| dedup host
| eval Reboot_Time_EST = strftime(_time, "%Y-%m-%d %I:%M:%S:%p")
| eval Reboot_Site_Time = substr(LastBootUpTime,1,4) + "-" + substr(LastBootUpTime,5,2) + "-" + substr(LastBootUpTime,7,2) + " " + substr(LastBootUpTime,9,2) + ":" + substr(LastBootUpTime,11,2) + ":" + substr(LastBootUpTime,13,2)
| table Plant Server Type Reboot_Time_EST Reboot_Site_Time
| sort by Site Server
| eval itsiSeverity = 5
| eval itsiStatus = 2
| eval itsiTower = "MFG"
| eval itsiAlert = "Proficy Server Reboot Alert in last 15 minutes"
| rename SNOW_Location_Name AS Location
| eval n = now()
| eval url_start_time = n - (1 * 24 * 3600)
| eval url_end_time = n + (1 * 24 * 3600)
| eval episode_url1 = "https://itsi-pg-mfg-splunk-prod.splunkcloud.com/en-US/app/itsi/itsi_event_management?earliest=".url_start_time."&latest=".url_end_time."&dedup=true&filter="
| eval episode_url1 = episode_url1."%5B%7B%22label%22%3A%22Episode%20Id%22%2C%22id%22%3A%22itsi_group_id%22%2C%22value%22%3A"
| eval episode_url2 = "%2C%22text%22%3A"
| eval episode_url3 = "%7D%5D"
| fields - n url_start_time url_end_time
``` ELK fields ```
| eval alert_name = "PG-GLOBAL-Proficy-Server-Reboot-ALERT"
| eval facility_type = "Site"
| eval facility_area = "Manufacturing"
| eval snow_location = Location
| eval application_name = "Proficy Plant Applications"
| eval application_id = "CI000008099"
| eval name_space = "Manufacturing"
| eval snow_configuration_item = "Proficy Plant Applications"
| eval snow_incident_type = "Design: Capacity Overutilization"
| eval snow_category = "Business Application & Databases"
| eval snow_subcategory = "Monitoring"
| eval snow_is_cbp_impacted = "Yes"
| eval alert_severity = "High"
| eval alert_urgency = "High"
| eval snow_severity = "1"
| eval snow_urgency = "2"
| eval snow_impact = "2"
| eval primary_property = "hostname"
| eval secondary_property = "alert_name"
| eval source_system = "splunk"
| eval stage = "Prod"
| eval snow_contact_type = "Auto Ticket"
| eval hostname = Server
| eval app_component = ""
| eval app_component_ID = ""
| eval status = "firing"
| eval correlation_rule = "application_id, site_id, facility_name, hostname, infrastructure_type"
| eval actionability_type = "incident"
| eval alert_actionable = "true"
| eval uc_environment = "sandbox"
Hi folks, I am looking to create a trellis view for a pie chart in a dashboard, but I am unable to and end up with the below error. Could someone help with this: is it possible to create a trellis using pie charts in a dashboard?