All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I am sending email as an action for an alert and including the result as an inline table. The _time field is one of the columns in this table and shows up in the format "DDD MMM 24hh:mm:ss YYYY". Opening the alert in Search shows a different format, "YYYY-MM-DD 24hh:mm:ss.sss". Is there a way to format the _time field in the email's inline table?
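One common workaround, sketched with placeholder index, sourcetype, and field names: convert _time into a display string at the end of the alert search, so the email action renders that string rather than applying its own formatting to the raw epoch value.

index=main sourcetype=my_alert_source
| eval time=strftime(_time, "%Y-%m-%d %H:%M:%S")
| table time, host, message

Whatever strftime produces is exactly what the inline table shows, in both the email and Search.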
In a SmartStore configuration, there are a significant number of deletes/writes as buckets are evicted and copied to the indexer's volume.  To improve performance, SSD disks are being used.  In this case, how often should one run the TRIM command to help with SSD garbage collection?
I have the following SPL search.

index="cloudflare"
| top ClientRequestPath by ClientRequestHost
| eval percent = round(percent,2)
| rename count as "Events", ClientRequestPath as "Path", percent as "%"

Which gives me this result. I also need to group it by 10-minute time ranges and calculate the difference in percent between the two previous time ranges for every line. Help me figure out how to do that, thanks.
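A possible starting point, sketched under the assumption that the percentage should be computed per 10-minute bucket and compared with the previous bucket for the same host/path pair:

index="cloudflare"
| bin _time span=10m
| stats count by _time, ClientRequestHost, ClientRequestPath
| eventstats sum(count) as total by _time, ClientRequestHost
| eval percent=round(count/total*100, 2)
| sort 0 ClientRequestHost ClientRequestPath _time
| streamstats current=f last(percent) as prev_percent by ClientRequestHost, ClientRequestPath
| eval percent_diff=round(percent-prev_percent, 2)

top is replaced with an explicit stats/eventstats pair because top collapses time; streamstats then carries the previous bucket's percent forward so percent_diff is the change between adjacent 10-minute windows.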
I have an HF configured to send data via sourcetype A. After some time it stops sending data to A. I then move the data to a different HF with sourcetype "test" (to check whether it is working), and from the new HF I route the data back to sourcetype A itself. Will it re-ingest the data, or continue from the checkpoint where it left off? Will it ignore the data that was already sent with sourcetype "test"? I need help and a clear explanation.
I am using the query below to get index sizes, consumed space, and frozenTimePeriodInSecs details.

| rest /services/data/indexes splunk_server="ABC"
| stats min(minTime) as MINUTC max(maxTime) as MAXUTC max(totalEventCount) as MaxEvents max(currentDBSizeMB) as CurrentMB max(maxTotalDataSizeMB) as MaxMB max(frozenTimePeriodInSecs) as frozenTimePeriodInSecs by title
| eval MBDiff=MaxMB-CurrentMB
| eval MINTIME=strptime(MINUTC,"%FT%T%z")
| eval MAXTIME=strptime(MAXUTC,"%FT%T%z")
| eval MINUTC=strftime(MINTIME,"%F %T")
| eval MAXUTC=strftime(MAXTIME,"%F %T")
| eval DAYS_AGO=round((MAXTIME-MINTIME)/86400,2)
| eval YRS_AGO=round(DAYS_AGO/365.2425,2)
| eval frozenTimePeriodInDAYS=round(frozenTimePeriodInSecs/86400,2)
| eval DAYS_LEFT=frozenTimePeriodInDAYS-DAYS_AGO
| rename frozenTimePeriodInDAYS as frznTimeDAYS
| table title MINUTC MAXUTC frznTimeDAYS DAYS_LEFT DAYS_AGO YRS_AGO MaxEvents CurrentMB MaxMB MBDiff

title  MINUTC            MAXUTC            frznTimeDAYS  DAYS_LEFT  DAYS_AGO  YRS_AGO  MaxEvents  CurrentMB  MaxMB  MBDiff
XYZ    24-06-2018 01:24  10-02-2024 21:11  62            -1995.87   2057.87   5.63     13115066   6463       8192   1729

For index 'XYZ', frozenTimePeriodInSecs works out to 62 days, so per that setting it should only retain the last two months of data, yet MINTIME still shows a very old date, '24-06-2018 01:24'. When I check the event counts for data older than 62 days, there are very few events compared with the past 62 days (current event counts are very high). So why are these older events still showing in Splunk, and why only a few of them rather than all? I want to understand this scenario before increasing the frozen time period.
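Buckets freeze as whole units rather than event by event, so one way to check what is going on, a sketch assuming access to the dbinspect command:

| dbinspect index=XYZ
| eval earliest=strftime(startEpoch, "%F %T"), latest=strftime(endEpoch, "%F %T")
| table bucketId state earliest latest eventCount

If a warm or cold bucket's newest event (endEpoch) is still inside the 62-day window, the entire bucket is retained, including any very old events it happens to contain, which would explain a handful of 2018 events surviving a 62-day frozenTimePeriodInSecs.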
Hi team, 1. How do we monitor the Splunk DB logs? It is already installed and configured. 2. How can two Splunk DB queries be ingested into a single index?
I have an alert that, when triggered, sends an email with a .PDF attachment of the column chart. I am trying to remove the legend truncation. In the UI ('Format' with the paintbrush icon) there is no option for ellipsisNone, only end, middle, and start. I then tried the advanced edit and get this error when trying to set the attribute display.visualizations.charting.legend.labelStyle.overflowMode to ellipsisNone:

Value of argument 'display.visualizations.charting.legend.labelStyle.overflowMode' must be either 'ellipsisEnd', 'ellipsisMiddle', or 'ellipsisStart'

I cannot find a way to edit an alert visualization's .html code to do this manually?
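For what it's worth, the same option does appear to accept ellipsisNone in a dashboard panel; a minimal SimpleXML sketch (the query is a placeholder):

<chart>
  <search>
    <query>index=_internal | timechart count by sourcetype</query>
  </search>
  <option name="charting.legend.labelStyle.overflowMode">ellipsisNone</option>
</chart>

So one possible workaround, if the saved-search endpoint keeps rejecting the value, is to host the chart on a dashboard and schedule a PDF delivery of that dashboard instead of attaching the alert's own rendering.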
Hello, I created a dashboard with a text input; the token is then passed to a panel that executes this command:

<query>| makeresults | eval technique_id="$technique_id$" | where isnotnull(technique_id) | mitrepurplelab $technique_id$</query>

The purpose of this command is to trigger a custom command with this config:

[mitrepurplelab]
filename = mitrepurplelab.py
enableheader = true
outputheader = true
requires_srinfo = true
chunked = true
streaming = true

The mitrepurplelab.py script is then triggered; here is its code:

import sys
import requests
import logging

logging.basicConfig(filename='mitrepurplelab.log', level=logging.DEBUG,
                    format='%(asctime)s - %(levelname)s - %(message)s')

def main():
    logging.debug(f"Arguments received: {sys.argv}")
    if len(sys.argv) != 2:
        logging.error("Incorrect usage: python script.py <technique_id>")
        print("Usage: python script.py <technique_id>")
        return
    technique_id = sys.argv[1]
    url = "http://192.168.142.146:5000/api/mitre_attack_execution"
    # Make sure your JWT token is complete and correctly formatted
    token = "..."  # token value omitted
    headers = {"Authorization": f"Bearer {token}"}
    params = {"technique_id": technique_id}
    response = requests.post(url, headers=headers, params=params)
    if response.status_code == 200:
        print("Request successful!")
        print("Server response:")
        print(response.json())
    else:
        logging.error(f"Error: {response.status_code}, Response body: {response.text}")
        print(f"Error: {response.status_code}, Response body: {response.text}")

if __name__ == "__main__":
    main()

The script works well when run by hand, for example:

python3 bin/mitrepurplelab.py T1059.003

but when I execute it via the dashboard I get an error; in the panel's search.log I see this:

02-09-2024 10:37:46.075 INFO dispatchRunner [1626 MainThread] - Search process mode: preforked (reused process by new user) (build 1fff88043d5f).
02-09-2024 10:37:46.075 INFO dispatchRunner [1626 MainThread] - registering build time modules, count=1
02-09-2024 10:37:46.075 INFO dispatchRunner [1626 MainThread] - registering search time components of build time module name=vix
02-09-2024 10:37:46.076 INFO BundlesSetup [1626 MainThread] - Setup stats for /opt/splunk/etc: wallclock_elapsed_msec=7, cpu_time_used=0.00727909, shared_services_generation=2, shared_services_population=1
02-09-2024 10:37:46.080 INFO UserManagerPro [1626 MainThread] - Load authentication: forcing roles="admin, power, user"
02-09-2024 10:37:46.080 INFO UserManager [10446 RunDispatch] - Setting user context: splunk-system-user
02-09-2024 10:37:46.080 INFO UserManager [10446 RunDispatch] - Done setting user context: NULL -> splunk-system-user
02-09-2024 10:37:46.080 INFO UserManager [10446 RunDispatch] - Unwound user context: splunk-system-user -> NULL
02-09-2024 10:37:46.080 INFO UserManager [10446 RunDispatch] - Setting user context: admin
02-09-2024 10:37:46.080 INFO UserManager [10446 RunDispatch] - Done setting user context: NULL -> admin
02-09-2024 10:37:46.080 INFO dispatchRunner [10446 RunDispatch] - search context: user="admin", app="Ta-Purplelab", bs-pathname="/opt/splunk/etc"
02-09-2024 10:37:46.080 INFO SearchParser [10446 RunDispatch] - PARSING: | makeresults | eval technique_id="T1059.003" | where isnotnull(technique_id) | mitrepurplelab T1059.003
02-09-2024 10:37:46.081 INFO dispatchRunner [10446 RunDispatch] - Search running in non-clustered mode
02-09-2024 10:37:46.081 INFO dispatchRunner [10446 RunDispatch] - SearchHeadInitSearchMs=0
02-09-2024 10:37:46.081 INFO dispatchRunner [10446 RunDispatch] - Executing the Search orchestrator and iterator model (dfs=false).
02-09-2024 10:37:46.081 INFO SearchOrchestrator [10446 RunDispatch] - SearchOrchestrator is constructed. sid=admin__admin_VGEtUHVycGxlbGFi__search1_1707475066.37, eval_only=0
02-09-2024 10:37:46.081 INFO SearchOrchestrator [10446 RunDispatch] - Initialized the SRI
02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Initializing feature flags from config. feature_seed=2135385444
02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Setting feature_flag=parallelreduce:enablePreview:true
02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Setting feature_flag=search:search_retry:false
02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Setting feature_flag=search:search_retry_realtime:false
02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Setting feature_flag=parallelreduce:autoAppliedPercentage:false
02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Setting feature_flag=subsearch:enableConcurrentPipelineProcessing:false
02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Setting feature_flag=subsearch:concurrent_pipeline_adhoc:false
02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Setting feature_flag=append:support_multiple_data_sources:false
02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Setting feature_flag=join:support_multiple_data_sources:false
02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Setting feature_flag=search_optimization::set_required_fields:stats:false
02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Setting feature_flag=searchresults:srs2:false
02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Setting feature_flag=search:read_final_results_from_timeliner:true
02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Setting feature_flag=search:fetch_remote_search_telemetry:true
02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Setting feature_flag=testing:boolean_flag:false
02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Setting feature_flag=testing:percent_flag:true
02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Setting feature_flag=testing:legacy_flag:true
02-09-2024 10:37:46.081 INFO SearchOrchestrator [10446 RunDispatch] - Search feature_flags={"v":1,"enabledFeatures":["parallelreduce:enablePreview","search:read_final_results_from_timeliner","search:fetch_remote_search_telemetry","testing:percent_flag","testing:legacy_flag"],"disabledFeatures":["search:search_retry","search:search_retry_realtime","parallelreduce:autoAppliedPercentage","subsearch:enableConcurrentPipelineProcessing","subsearch:concurrent_pipeline_adhoc","append:support_multiple_data_sources","join:support_multiple_data_sources","search_optimization::set_required_fields:stats","searchresults:srs2","testing:boolean_flag"]}
02-09-2024 10:37:46.081 INFO ISplunkDispatch [10446 RunDispatch] - Not running in splunkd. Bundle replication not triggered.
02-09-2024 10:37:46.081 INFO SearchOrchestrator [10449 searchOrchestrator] - Initialzing the run time settings for the orchestrator.
02-09-2024 10:37:46.081 INFO UserManager [10449 searchOrchestrator] - Setting user context: admin
02-09-2024 10:37:46.081 INFO UserManager [10449 searchOrchestrator] - Done setting user context: NULL -> admin
02-09-2024 10:37:46.081 INFO AdaptiveSearchEngineSelector [10449 searchOrchestrator] - Search execution_plan=classic
02-09-2024 10:37:46.082 INFO SearchOrchestrator [10449 searchOrchestrator] - Creating the search DAG.
02-09-2024 10:37:46.082 INFO SearchParser [10449 searchOrchestrator] - PARSING: | makeresults | eval technique_id="T1059.003" | where isnotnull(technique_id) | mitrepurplelab T1059.003
02-09-2024 10:37:46.082 INFO DispatchStorageManagerInfo [10449 searchOrchestrator] - Successfully created new dispatch directory for search job. sid=dc5edf3eebc8ccb6_tmp dispatch_dir=/opt/splunk/var/run/splunk/dispatch/dc5edf3eebc8ccb6_tmp
02-09-2024 10:37:46.082 INFO SearchParser [10449 searchOrchestrator] - PARSING: premakeresults
02-09-2024 10:37:46.082 INFO DispatchThread [10449 searchOrchestrator] - BatchMode: allowBatchMode: 1, conf(1): 1, timeline/Status buckets(0):0, realtime(0):0, report pipe empty(0):0, reqTimeOrder(0):0, summarize(0):0, statefulStreaming(0):0
02-09-2024 10:37:46.082 INFO DispatchThread [10449 searchOrchestrator] - required fields list to add to remote search = *
02-09-2024 10:37:46.082 INFO DispatchCommandProcessor [10449 searchOrchestrator] - summaryHash=f2df6493ea859e37 summaryId=A6ADAC30-27EC-4F28-BEB9-3BD2C7EC3E53_Ta-Purplelab_admin_f2df6493ea859e37 remoteSearch=premakeresults
02-09-2024 10:37:46.082 INFO DispatchCommandProcessor [10449 searchOrchestrator] - summaryHash=NSf2df6493ea859e37 summaryId=A6ADAC30-27EC-4F28-BEB9-3BD2C7EC3E53_Ta-Purplelab_admin_NSf2df6493ea859e37 remoteSearch=premakeresults
02-09-2024 10:37:46.082 INFO DispatchThread [10449 searchOrchestrator] - Getting summary ID for summaryHash=NSf2df6493ea859e37
02-09-2024 10:37:46.084 INFO DispatchThread [10449 searchOrchestrator] - Did not find a usable summary_id, setting info._summary_mode=none, not modifying input summary_id=A6ADAC30-27EC-4F28-BEB9-3BD2C7EC3E53_Ta-Purplelab_admin_NSf2df6493ea859e37
02-09-2024 10:37:46.085 INFO SearchParser [10449 searchOrchestrator] - PARSING: | makeresults | eval technique_id="T1059.003" | where isnotnull(technique_id) | mitrepurplelab T1059.003
02-09-2024 10:37:46.085 INFO ChunkedExternProcessor [10449 searchOrchestrator] - Running process: /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Ta-Purplelab/bin/mitrepurplelab.py
02-09-2024 10:37:46.155 ERROR ChunkedExternProcessor [10449 searchOrchestrator] - Failed attempting to parse transport header: Usage: python script.py <technique_id>
02-09-2024 10:37:46.161 ERROR ChunkedExternProcessor [10449 searchOrchestrator] - Error in 'mitrepurplelab' command: External search command exited unexpectedly.
02-09-2024 10:37:46.161 INFO ScopedTimer [10449 searchOrchestrator] - search.optimize 0.076785640
02-09-2024 10:37:46.161 WARN SearchPhaseGenerator [10449 searchOrchestrator] - AST processing error, exception=31SearchProcessorMessageException, error=Error in 'mitrepurplelab' command: External search command exited unexpectedly.. Fall back to 2 phase.
02-09-2024 10:37:46.161 INFO SearchPhaseGenerator [10449 searchOrchestrator] - Executing two phase fallback for the search=| makeresults | eval technique_id="T1059.003" | where isnotnull(technique_id) | mitrepurplelab T1059.003
02-09-2024 10:37:46.161 INFO SearchParser [10449 searchOrchestrator] - PARSING: | makeresults | eval technique_id="T1059.003" | where isnotnull(technique_id) | mitrepurplelab T1059.003
02-09-2024 10:37:46.162 INFO ChunkedExternProcessor [10449 searchOrchestrator] - Running process: /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Ta-Purplelab/bin/mitrepurplelab.py
02-09-2024 10:37:46.232 ERROR ChunkedExternProcessor [10449 searchOrchestrator] - Failed attempting to parse transport header: Usage: python script.py <technique_id>
02-09-2024 10:37:46.239 ERROR ChunkedExternProcessor [10449 searchOrchestrator] - Error in 'mitrepurplelab' command: External search command exited unexpectedly.
02-09-2024 10:37:46.239 ERROR SearchPhaseGenerator [10449 searchOrchestrator] - Fallback to two phase failed with SearchProcessorException: Error in 'mitrepurplelab' command: External search command exited unexpectedly.
02-09-2024 10:37:46.239 WARN SearchPhaseGenerator [10449 searchOrchestrator] - Failed to create search phases: exception=31SearchProcessorMessageException, error=Error in 'mitrepurplelab' command: External search command exited unexpectedly.
02-09-2024 10:37:46.240 INFO SearchStatusEnforcer [10449 searchOrchestrator] - sid=admin__admin_VGEtUHVycGxlbGFi__search1_1707475066.37, newState=BAD_INPUT_CANCEL, message=Error in 'mitrepurplelab' command: External search command exited unexpectedly.
02-09-2024 10:37:46.240 ERROR SearchStatusEnforcer [10449 searchOrchestrator] - SearchMessage orig_component=ChunkedExternProcessor sid=admin__admin_VGEtUHVycGxlbGFi__search1_1707475066.37 message_key=CHUNKED:UNEXPECTED_EXIT message=Error in 'mitrepurplelab' command: External search command exited unexpectedly.
02-09-2024 10:37:46.240 INFO SearchStatusEnforcer [10449 searchOrchestrator] - State changed to BAD_INPUT_CANCEL: Error in 'mitrepurplelab' command: External search command exited unexpectedly.
02-09-2024 10:37:46.240 INFO SearchStatusEnforcer [10449 searchOrchestrator] - Enforcing disk quota = 10485760000
02-09-2024 10:37:46.242 INFO DispatchManager [10449 searchOrchestrator] - DispatchManager::dispatchHasFinished(id='admin__admin_VGEtUHVycGxlbGFi__search1_1707475066.37', username='admin')
02-09-2024 10:37:46.242 INFO UserManager [10449 searchOrchestrator] - Unwound user context: admin -> NULL
02-09-2024 10:37:46.242 INFO SearchOrchestrator [10446 RunDispatch] - SearchOrchestrator is destructed. sid=admin__admin_VGEtUHVycGxlbGFi__search1_1707475066.37, eval_only=0
02-09-2024 10:37:46.242 INFO SearchStatusEnforcer [10446 RunDispatch] - SearchStatusEnforcer is already terminated
02-09-2024 10:37:46.242 INFO UserManager [10446 RunDispatch] - Unwound user context: admin -> NULL
02-09-2024 10:37:46.242 INFO LookupDataProvider [10446 RunDispatch] - Clearing out lookup shared provider map
02-09-2024 10:37:46.242 INFO dispatchRunner [1626 MainThread] - RunDispatch is done: sid=admin__admin_VGEtUHVycGxlbGFi__search1_1707475066.37, exit=0

The error seems to come from the argument handling going wrong:

02-09-2024 10:37:46.162 INFO ChunkedExternProcessor [10449 searchOrchestrator] - Running process: /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Ta-Purplelab/bin/mitrepurplelab.py
02-09-2024 10:37:46.232 ERROR ChunkedExternProcessor [10449 searchOrchestrator] - Failed attempting to parse transport header: Usage: python script.py <technique_id>
02-09-2024 10:37:46.239 ERROR ChunkedExternProcessor [10449 searchOrchestrator] - Error in 'mitrepurplelab' command: External search command exited unexpectedly.

I don't understand why, because you can see that the argument is passed to the custom command, and I can't retrieve any information about what is actually transmitted as an argument to the Python script by the custom command. If you have any ideas, it would be a great help!
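One observation, offered as a hypothesis: with chunked = true in commands.conf, splunkd does not pass search arguments on the command line. It launches the script with no extra argv and speaks the chunked protocol over stdin/stdout, so len(sys.argv) != 2 holds, the script prints the usage string to stdout, and that stray output is exactly what "Failed attempting to parse transport header: Usage: ..." is complaining about. A minimal sketch of the same command written against the Splunk Python SDK (splunklib), which implements the protocol and exposes positional arguments via self.fieldnames; the lib/ path and the output field name are assumptions:

#!/usr/bin/env python3
import os
import sys

# assumes splunklib is bundled under the app's lib/ directory
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "lib"))
from splunklib.searchcommands import dispatch, StreamingCommand, Configuration

@Configuration()
class MitrePurpleLabCommand(StreamingCommand):
    def stream(self, records):
        # positional arguments like "T1059.003" arrive here, not in sys.argv
        technique_id = self.fieldnames[0] if self.fieldnames else None
        for record in records:
            record["technique_id_used"] = technique_id
            # ... call the API here and attach the response to the record ...
            yield record

dispatch(MitrePurpleLabCommand, sys.argv, sys.stdin, sys.stdout, __name__)

Anything written to stdout outside the protocol (print(...)) corrupts the transport header, so diagnostics should go through self.logger or the logging module instead.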
Hello Splunk Community, I am currently testing the Splunk universal forwarder to replace Logstash, which uses a lot of memory. I installed the latest UF 9.2.0.1 (and tried several other versions: 9.1.1, 8.2.9) and everything is working fine, except that the container where the Splunk UF is running seems to have a memory leak: even when I do not add any configuration to the Splunk UF, the memory of the container keeps increasing. My Docker image uses a FROM image that I use in many other containers without any leak, so I have no doubt the Splunk UF is at fault here. So I wonder whether anyone using the Splunk UF is experiencing the same issue, or whether there is an ongoing ticket tracking this memory leak? The leak is not huge, but my container is supposed to run 24/7, so I cannot afford any leak. Thanks for your feedback.
From this panel I want to change every label under the bars, e.g. from "Mon Jan 15" to "Jan 15". I am not getting it; I checked the UI settings and the source code as well, but nothing shows up. Also, is it possible to do this requirement from the dashboard?
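If the labels can be pre-formatted in the search itself, a sketch with placeholder index/sourcetype (charting over the string field instead of _time):

index=main sourcetype=my_data
| bin _time span=1d
| stats count by _time
| eval day_label=strftime(_time, "%b %d")
| fields day_label count

Rendering the bar chart over day_label makes each bar's label exactly the strftime string ("Jan 15"), at the cost of losing the native time axis; the rows stay in time order because stats grouped by _time first.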
Hi, I want to create a search that finds all the events for which the last row exists but at least one row is missing. An example is attached below. Splunk query:

`macro_events_prod_gch_comms_esa` gch_messageType="Seev.047*" host="p*" gch_status="*" NOT "BCS"
| table BO_PageNumber, BO_LastPage, gch_status
| rename BO_PageNumber as PageNo, BO_LastPage as LastPage, gch_status as Status
| sort PageNo

The requirement is to find all the events for which a row with LastPage as True exists, and at least one row is missing with a PageNo less than the PageNo of the row with LastPage as True.
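A sketch of one approach, assuming there is a per-message key that groups the pages of a single document together (gch_messageId below is hypothetical, and the "True" literal may differ in the real data):

`macro_events_prod_gch_comms_esa` gch_messageType="Seev.047*" host="p*" gch_status="*" NOT "BCS"
| stats values(BO_PageNumber) as pages
        max(eval(if(BO_LastPage="True", BO_PageNumber, null()))) as last_page
        by gch_messageId
| where isnotnull(last_page)
| eval expected=mvrange(1, last_page + 1)
| eval missing=mvmap(expected, if(isnull(mvfind(pages, "^".expected."$")), expected, null()))
| where mvcount(missing) > 0

mvrange builds the full 1..last_page sequence, and mvmap/mvfind keeps the page numbers that never arrived; any surviving row is a document whose last page is present but with at least one earlier page missing.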
How can I create Investigations in Splunk ES using the REST API rather than the Splunk Web UI?
Hello, why does the Splunk drilldown open two tabs instead of one? Here's my setting: Image => Drilldown settings => On click, link to custom URL => put URL. When I clicked, it opened two tabs. I found an old post, but nobody answered. Please help. Thanks. https://community.splunk.com/t5/Dashboards-Visualizations/Why-is-Splunk-drilldown-setting-opening-two-tabs-instead-of-one/m-p/602444
Hi, we have Splunk running behind a load balancer, so we reach it on the standard port 443. But on the backend it uses a different port, which the LB connects to, hence that port needs to stay set as the Web port. The problem is that when we get alerts, Splunk still puts the port from the Web port setting in the URL, so the URL doesn't work and we have to manually edit it to remove the port. Is there no separate setting for this, so that the actual listening port and the port it puts in the URL can be controlled independently?
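There does appear to be a knob for this, sketched here with a placeholder hostname since I have not verified it against this exact topology: the hostname setting in alert_actions.conf controls the host (and optional port) used when building links in alert emails, independently of web.conf's listening port.

# $SPLUNK_HOME/etc/system/local/alert_actions.conf
[email]
hostname = https://splunk.example.com

With no port in the value, links come out on the protocol default (443 for https), which is what the load balancer is listening on.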
Is there a way to add an interval setting to define the polling for a flat file? Not sure why it was requested, but I was asked if it was possible and thought for sure it was, only to find that it is currently not an option according to the inputs.conf section in the admin manual. https://docs.splunk.com/Documentation/Splunk/latest/admin/Inputsconf I read that the default polling may be 1 ms, to collect the modified file in near real time. I offered an alternative: create an sh or PowerShell script (Get-Content, or some other scripting language) and then set an interval to read the flat file at the desired time; however, I would have to duplicate all of the options available for a file monitor stanza, such as crcSalt and whitelist/blacklist, within the script, which would have to be code reviewed and go through a lengthy pipeline. Any help would be appreciated to say whether this is a definite no-go, or whether it is a possible enhancement request to Splunk for the next version. Thank you.
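For reference, the interval knob that does exist belongs to scripted inputs, not monitor stanzas, so a sketch of the alternative mentioned above (script name, sourcetype, and index are placeholders):

# inputs.conf
[script://./bin/read_flat_file.sh]
interval = 300
sourcetype = my_flat_file
index = main
disabled = 0

A [monitor://...] stanza exposes no such setting; tailing is continuous rather than scheduled, which is why the admin manual lists no interval for it.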
Hi, I want to extract the fields destination, messages, and inflightMessages from the JSON below. This is one of the latest events:

{
  "analytics": [
    { "destination": "billing.events.prod", "messages": 0, "inflightMessages": 0 },
    { "destination": "billing.events.dev", "messages": 0, "inflightMessages": 0 },
    { "destination": "hub.values.prod", "messages": 0, "inflightMessages": 0 },
    { "destination": "hub.fifo-prod", "messages": 0, "inflightMessages": 0 }
  ]
}

This is the SPL I am using:

index=myindex sourcetype=mysourcetype
| spath input=_raw
| table analytics{}.destination, analytics{}.messages, analytics{}.inflightMessages

In the interesting fields I get "analytics{}.destination", and when I move the cursor to see the values and associated counts, each value shows a count of 2, even when searching for a single event. Why is this happening, and what is the issue? This data generally comes from MuleSoft MQ.
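One way to flatten the array into one row per destination, a sketch using spath and mvexpand:

index=myindex sourcetype=mysourcetype
| spath path=analytics{} output=analytics
| mvexpand analytics
| spath input=analytics
| table destination messages inflightMessages

spath path=analytics{} pulls each array element out as its own multivalue entry, mvexpand turns those into separate rows, and the second spath parses each element's own JSON, which also sidesteps the doubled counts since the fields are no longer multivalued.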
Hello - admitted new guy here. I have a heavy forwarder sending data from a MySQL database table into Splunk once a day. Works great. But now I want to send the data from a 'customer' type table with about 200 rows, and I would like to replace the data every day rather than append 200 new rows to the index every day. How is this best accomplished? I tried searching, but I may not even be using the correct terminology.
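One common pattern, sketched with hypothetical connection and lookup names: keep small reference tables out of the index entirely and refresh a lookup instead. A scheduled search using DB Connect's dbxquery command can overwrite the lookup file each day:

| dbxquery connection=my_mysql_connection query="SELECT * FROM customer"
| outputlookup customer_reference.csv

outputlookup replaces the file's contents on every run, so the lookup always reflects the current 200 rows, and other searches join against it with | lookup customer_reference.csv ... ; indexes are append-only, which is why replacing rows in an index is not really an option.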
Hi, I've got a problem with this playbook code block: the custom functions I try to execute seem to hang indefinitely. I also know the custom function works, because I've successfully used it from a utility block. I've tried a few different arrangements of this logic, including initializing cfid with both of the custom function calls and consolidating the custom function names into a single while loop with phantom.completed, and I have used pass instead of sleep, but the custom function doesn't seem to return/complete. Here's another example, which is basically the same except it consolidates the while loops and executes both custom functions at the same time. Once either of these scenarios (or something similar) succeeds, I need to get the results from the custom function executions (see the screenshot), combine them into a single string, and then send "data" to another function: post_http_data(container=container, body=json.dumps({"text": data})). Any assistance would be great. Thanks.
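A guess at the cause, with a sketch to match: in SOAR playbooks, custom functions run asynchronously, so a while loop that polls inside the same block never yields control back to the engine, and the function can never complete (deadlock by busy-wait). The callback pattern avoids this; the function name and datapath below are hypothetical:

import json
import phantom.rules as phantom

def call_my_cf(container):
    parameters = [{"input_1": "some value"}]
    # hand off to the engine; do NOT wait in a loop here
    phantom.custom_function(custom_function="local/my_custom_function",
                            parameters=parameters,
                            name="call_my_cf",
                            callback=on_cf_done)

def on_cf_done(action=None, success=None, container=None, results=None,
               handle=None, custom_function=None, **kwargs):
    # runs only after the custom function has completed
    outputs = phantom.collect2(container=container,
                               datapath=["call_my_cf:custom_function_result.data.output"])
    data = " ".join(str(o[0]) for o in outputs if o[0] is not None)
    post_http_data(container=container, body=json.dumps({"text": data}))

Moving the "combine and post" step into the callback replaces the while/sleep polling entirely.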
I have UFs configured on several domain controllers that point to a heavy forwarder, which points to Splunk Cloud. I am trying to configure Windows Event Logs. Application, System & DNS logs are working correctly; however, no Security logs for any of the DCs are coming in. The Splunk service is running with a service account that has proper admin permissions, and I have edited the DC GPO to allow the service account access to 'Manage auditing and security logs'. I am at a loss here and not sure what else to troubleshoot. Here is the inputs.conf file on each DC:

[WinEventLog://Application]
checkpointInterval = 5
current_only = 0
disabled = 0
start_from = oldest
index = wineventlog

[WinEventLog://Security]
checkpointInterval = 5
current_only = 0
disabled = 0
start_from = oldest
index = wineventlog

[WinEventLog://System]
checkpointInterval = 5
current_only = 0
disabled = 0
start_from = oldest
index = wineventlog

[WinEventLog://DNS Server]
checkpointInterval = 5
current_only = 0
disabled = 0
start_from = oldest
index = wineventlog
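When chasing this kind of gap, the forwarder's own logs usually say why the channel cannot be opened; a plain keyword search of _internal is a safe first step (the host filter is a placeholder):

index=_internal source=*splunkd.log* host=my-dc-01 "WinEventLog" (ERROR OR WARN)

Access-denied errors here would point back at channel permissions. A common fix worth checking alongside the GPO right is membership of the service account in the built-in Event Log Readers group, since the Security log has its own ACL separate from 'Manage auditing and security logs'.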
Hello, I've read the following documentation:

https://docs.splunk.com/Documentation/Splunk/9.2.0/Indexer/Backupindexeddata
https://docs.splunk.com/Documentation/Splunk/9.2.0/Admin/Backupconfigurations

Basically, to back up Splunk I need to make a copy of "$SPLUNK_HOME/etc/*" and "$SPLUNK_HOME/var/lib/splunk/defaultdb/db/*" (after rolling the hot buckets). My question is, how is this restored? Would I just paste the copied files back into a working Splunk instance? Then the data can be searched normally? Thank you