All Topics

Hello, I created a dashboard with a text input; the token is then passed to a panel that executes this search:

<query>| makeresults | eval technique_id="$technique_id$" | where isnotnull(technique_id) | mitrepurplelab $technique_id$</query>

The purpose of this search is to trigger a custom command with this configuration:

[mitrepurplelab]
filename = mitrepurplelab.py
enableheader = true
outputheader = true
requires_srinfo = true
chunked = true
streaming = true

The mitrepurplelab.py script is then triggered; here is its code:

import sys
import requests
import logging

logging.basicConfig(filename='mitrepurplelab.log', level=logging.DEBUG,
                    format='%(asctime)s - %(levelname)s - %(message)s')

def main():
    logging.debug(f"Arguments received: {sys.argv}")
    if len(sys.argv) != 2:
        logging.error("Incorrect usage: python script.py <technique_id>")
        print("Usage: python script.py <technique_id>")
        return
    technique_id = sys.argv[1]
    url = "http://192.168.142.146:5000/api/mitre_attack_execution"
    # Make sure your JWT token is complete and correctly formatted
    token = "token"  # placeholder / redacted token
    headers = {"Authorization": f"Bearer {token}"}
    params = {"technique_id": technique_id}
    response = requests.post(url, headers=headers, params=params)
    if response.status_code == 200:
        print("Request successful!")
        print("Server response:")
        print(response.json())
    else:
        logging.error(f"Error: {response.status_code}, Response body: {response.text}")
        print(f"Error: {response.status_code}, Response body: {response.text}")

if __name__ == "__main__":
    main()

The script works well when run by hand, for example:

python3 bin/mitrepurplelab.py T1059.003

but when I execute it via the dashboard I get an error. In the panel's search.log I see this:

02-09-2024 10:37:46.075 INFO dispatchRunner [1626 MainThread] - Search process mode: preforked (reused process by new user) (build 1fff88043d5f).
02-09-2024 10:37:46.075 INFO dispatchRunner [1626 MainThread] - registering build time modules, count=1 02-09-2024 10:37:46.075 INFO dispatchRunner [1626 MainThread] - registering search time components of build time module name=vix 02-09-2024 10:37:46.076 INFO BundlesSetup [1626 MainThread] - Setup stats for /opt/splunk/etc: wallclock_elapsed_msec=7, cpu_time_used=0.00727909, shared_services_generation=2, shared_services_population=1 02-09-2024 10:37:46.080 INFO UserManagerPro [1626 MainThread] - Load authentication: forcing roles="admin, power, user" 02-09-2024 10:37:46.080 INFO UserManager [10446 RunDispatch] - Setting user context: splunk-system-user 02-09-2024 10:37:46.080 INFO UserManager [10446 RunDispatch] - Done setting user context: NULL -> splunk-system-user 02-09-2024 10:37:46.080 INFO UserManager [10446 RunDispatch] - Unwound user context: splunk-system-user -> NULL 02-09-2024 10:37:46.080 INFO UserManager [10446 RunDispatch] - Setting user context: admin 02-09-2024 10:37:46.080 INFO UserManager [10446 RunDispatch] - Done setting user context: NULL -> admin 02-09-2024 10:37:46.080 INFO dispatchRunner [10446 RunDispatch] - search context: user="admin", app="Ta-Purplelab", bs-pathname="/opt/splunk/etc" 02-09-2024 10:37:46.080 INFO SearchParser [10446 RunDispatch] - PARSING: | makeresults | eval technique_id="T1059.003" | where isnotnull(technique_id) | mitrepurplelab T1059.003 02-09-2024 10:37:46.081 INFO dispatchRunner [10446 RunDispatch] - Search running in non-clustered mode 02-09-2024 10:37:46.081 INFO dispatchRunner [10446 RunDispatch] - SearchHeadInitSearchMs=0 02-09-2024 10:37:46.081 INFO dispatchRunner [10446 RunDispatch] - Executing the Search orchestrator and iterator model (dfs=false). 02-09-2024 10:37:46.081 INFO SearchOrchestrator [10446 RunDispatch] - SearchOrchestrator is constructed. sid=admin__admin_VGEtUHVycGxlbGFi__search1_1707475066.37, eval_only=0 02-09-2024 10:37:46.081 INFO SearchOrchestrator [10446 RunDispatch] - Initialized the SRI 02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Initializing feature flags from config. 
feature_seed=2135385444 02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Setting feature_flag=parallelreduce:enablePreview:true 02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Setting feature_flag=search:search_retry:false 02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Setting feature_flag=search:search_retry_realtime:false 02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Setting feature_flag=parallelreduce:autoAppliedPercentage:false 02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Setting feature_flag=subsearch:enableConcurrentPipelineProcessing:false 02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Setting feature_flag=subsearch:concurrent_pipeline_adhoc:false 02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Setting feature_flag=append:support_multiple_data_sources:false 02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Setting feature_flag=join:support_multiple_data_sources:false 02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Setting feature_flag=search_optimization::set_required_fields:stats:false 02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Setting feature_flag=searchresults:srs2:false 02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Setting feature_flag=search:read_final_results_from_timeliner:true 02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Setting feature_flag=search:fetch_remote_search_telemetry:true 02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Setting feature_flag=testing:boolean_flag:false 02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Setting feature_flag=testing:percent_flag:true 02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Setting feature_flag=testing:legacy_flag:true 02-09-2024 10:37:46.081 INFO SearchOrchestrator [10446 RunDispatch] - Search feature_flags={"v":1,"enabledFeatures":["parallelreduce:enablePreview","search:read_final_results_from_timeliner","search:fetch_remote_search_telemetry","testing:percent_flag","testing:legacy_flag"],"disabledFeatures":["search:search_retry","search:search_retry_realtime","parallelreduce:autoAppliedPercentage","subsearch:enableConcurrentPipelineProcessing","subsearch:concurrent_pipeline_adhoc","append:support_multiple_data_sources","join:support_multiple_data_sources","search_optimization::set_required_fields:stats","searchresults:srs2","testing:boolean_flag"]} 02-09-2024 10:37:46.081 INFO ISplunkDispatch [10446 RunDispatch] - Not running in splunkd. Bundle replication not triggered. 02-09-2024 10:37:46.081 INFO SearchOrchestrator [10449 searchOrchestrator] - Initialzing the run time settings for the orchestrator. 02-09-2024 10:37:46.081 INFO UserManager [10449 searchOrchestrator] - Setting user context: admin 02-09-2024 10:37:46.081 INFO UserManager [10449 searchOrchestrator] - Done setting user context: NULL -> admin 02-09-2024 10:37:46.081 INFO AdaptiveSearchEngineSelector [10449 searchOrchestrator] - Search execution_plan=classic 02-09-2024 10:37:46.082 INFO SearchOrchestrator [10449 searchOrchestrator] - Creating the search DAG. 
02-09-2024 10:37:46.082 INFO SearchParser [10449 searchOrchestrator] - PARSING: | makeresults | eval technique_id="T1059.003" | where isnotnull(technique_id) | mitrepurplelab T1059.003 02-09-2024 10:37:46.082 INFO DispatchStorageManagerInfo [10449 searchOrchestrator] - Successfully created new dispatch directory for search job. sid=dc5edf3eebc8ccb6_tmp dispatch_dir=/opt/splunk/var/run/splunk/dispatch/dc5edf3eebc8ccb6_tmp 02-09-2024 10:37:46.082 INFO SearchParser [10449 searchOrchestrator] - PARSING: premakeresults 02-09-2024 10:37:46.082 INFO DispatchThread [10449 searchOrchestrator] - BatchMode: allowBatchMode: 1, conf(1): 1, timeline/Status buckets(0):0, realtime(0):0, report pipe empty(0):0, reqTimeOrder(0):0, summarize(0):0, statefulStreaming(0):0 02-09-2024 10:37:46.082 INFO DispatchThread [10449 searchOrchestrator] - required fields list to add to remote search = * 02-09-2024 10:37:46.082 INFO DispatchCommandProcessor [10449 searchOrchestrator] - summaryHash=f2df6493ea859e37 summaryId=A6ADAC30-27EC-4F28-BEB9-3BD2C7EC3E53_Ta-Purplelab_admin_f2df6493ea859e37 remoteSearch=premakeresults 02-09-2024 10:37:46.082 INFO DispatchCommandProcessor [10449 searchOrchestrator] - summaryHash=NSf2df6493ea859e37 summaryId=A6ADAC30-27EC-4F28-BEB9-3BD2C7EC3E53_Ta-Purplelab_admin_NSf2df6493ea859e37 remoteSearch=premakeresults 02-09-2024 10:37:46.082 INFO DispatchThread [10449 searchOrchestrator] - Getting summary ID for summaryHash=NSf2df6493ea859e37 02-09-2024 10:37:46.084 INFO DispatchThread [10449 searchOrchestrator] - Did not find a usable summary_id, setting info._summary_mode=none, not modifying input summary_id=A6ADAC30-27EC-4F28-BEB9-3BD2C7EC3E53_Ta-Purplelab_admin_NSf2df6493ea859e37 02-09-2024 10:37:46.085 INFO SearchParser [10449 searchOrchestrator] - PARSING: | makeresults | eval technique_id="T1059.003" | where isnotnull(technique_id) | mitrepurplelab T1059.003 02-09-2024 10:37:46.085 INFO ChunkedExternProcessor [10449 searchOrchestrator] - Running process: /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Ta-Purplelab/bin/mitrepurplelab.py 02-09-2024 10:37:46.155 ERROR ChunkedExternProcessor [10449 searchOrchestrator] - Failed attempting to parse transport header: Usage: python script.py <technique_id> 02-09-2024 10:37:46.161 ERROR ChunkedExternProcessor [10449 searchOrchestrator] - Error in 'mitrepurplelab' command: External search command exited unexpectedly. 02-09-2024 10:37:46.161 INFO ScopedTimer [10449 searchOrchestrator] - search.optimize 0.076785640 02-09-2024 10:37:46.161 WARN SearchPhaseGenerator [10449 searchOrchestrator] - AST processing error, exception=31SearchProcessorMessageException, error=Error in 'mitrepurplelab' command: External search command exited unexpectedly.. Fall back to 2 phase. 
02-09-2024 10:37:46.161 INFO SearchPhaseGenerator [10449 searchOrchestrator] - Executing two phase fallback for the search=| makeresults | eval technique_id="T1059.003" | where isnotnull(technique_id) | mitrepurplelab T1059.003 02-09-2024 10:37:46.161 INFO SearchParser [10449 searchOrchestrator] - PARSING: | makeresults | eval technique_id="T1059.003" | where isnotnull(technique_id) | mitrepurplelab T1059.003 02-09-2024 10:37:46.162 INFO ChunkedExternProcessor [10449 searchOrchestrator] - Running process: /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Ta-Purplelab/bin/mitrepurplelab.py 02-09-2024 10:37:46.232 ERROR ChunkedExternProcessor [10449 searchOrchestrator] - Failed attempting to parse transport header: Usage: python script.py <technique_id> 02-09-2024 10:37:46.239 ERROR ChunkedExternProcessor [10449 searchOrchestrator] - Error in 'mitrepurplelab' command: External search command exited unexpectedly. 02-09-2024 10:37:46.239 ERROR SearchPhaseGenerator [10449 searchOrchestrator] - Fallback to two phase failed with SearchProcessorException: Error in 'mitrepurplelab' command: External search command exited unexpectedly. 02-09-2024 10:37:46.239 WARN SearchPhaseGenerator [10449 searchOrchestrator] - Failed to create search phases: exception=31SearchProcessorMessageException, error=Error in 'mitrepurplelab' command: External search command exited unexpectedly. 02-09-2024 10:37:46.240 INFO SearchStatusEnforcer [10449 searchOrchestrator] - sid=admin__admin_VGEtUHVycGxlbGFi__search1_1707475066.37, newState=BAD_INPUT_CANCEL, message=Error in 'mitrepurplelab' command: External search command exited unexpectedly. 02-09-2024 10:37:46.240 ERROR SearchStatusEnforcer [10449 searchOrchestrator] - SearchMessage orig_component=ChunkedExternProcessor sid=admin__admin_VGEtUHVycGxlbGFi__search1_1707475066.37 message_key=CHUNKED:UNEXPECTED_EXIT message=Error in 'mitrepurplelab' command: External search command exited unexpectedly. 02-09-2024 10:37:46.240 INFO SearchStatusEnforcer [10449 searchOrchestrator] - State changed to BAD_INPUT_CANCEL: Error in 'mitrepurplelab' command: External search command exited unexpectedly. 02-09-2024 10:37:46.240 INFO SearchStatusEnforcer [10449 searchOrchestrator] - Enforcing disk quota = 10485760000 02-09-2024 10:37:46.242 INFO DispatchManager [10449 searchOrchestrator] - DispatchManager::dispatchHasFinished(id='admin__admin_VGEtUHVycGxlbGFi__search1_1707475066.37', username='admin') 02-09-2024 10:37:46.242 INFO UserManager [10449 searchOrchestrator] - Unwound user context: admin -> NULL 02-09-2024 10:37:46.242 INFO SearchOrchestrator [10446 RunDispatch] - SearchOrchestrator is destructed. 
sid=admin__admin_VGEtUHVycGxlbGFi__search1_1707475066.37, eval_only=0 02-09-2024 10:37:46.242 INFO SearchStatusEnforcer [10446 RunDispatch] - SearchStatusEnforcer is already terminated 02-09-2024 10:37:46.242 INFO UserManager [10446 RunDispatch] - Unwound user context: admin -> NULL 02-09-2024 10:37:46.242 INFO LookupDataProvider [10446 RunDispatch] - Clearing out lookup shared provider map 02-09-2024 10:37:46.242 INFO dispatchRunner [1626 MainThread] - RunDispatch is done: sid=admin__admin_VGEtUHVycGxlbGFi__search1_1707475066.37, exit=0

The error seems to come from the argument not being passed correctly:

02-09-2024 10:37:46.162 INFO ChunkedExternProcessor [10449 searchOrchestrator] - Running process: /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Ta-Purplelab/bin/mitrepurplelab.py
02-09-2024 10:37:46.232 ERROR ChunkedExternProcessor [10449 searchOrchestrator] - Failed attempting to parse transport header: Usage: python script.py <technique_id>
02-09-2024 10:37:46.239 ERROR ChunkedExternProcessor [10449 searchOrchestrator] - Error in 'mitrepurplelab' command: External search command exited unexpectedly.

I don't understand why, because you can see in the parsed search that the argument is transmitted to the custom command, and I can't find out what is actually passed as an argument to the Python script by the custom command. If you have any ideas, it would be a great help!
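For context: with chunked = true in commands.conf, Splunk speaks the search command protocol v2 to the script over stdin/stdout. The "Running process" line in the log shows the script launched with no command-line argument, so the technique ID never reaches sys.argv, and the script's print() output is what Splunk then fails to parse as a transport header. Below is a minimal, hedged sketch of a protocol v2 version using the splunk-sdk for Python (splunklib), assuming that package is bundled with the app; the named option means the search syntax becomes | mitrepurplelab technique_id=$technique_id$, which differs from the original positional call.

import sys

import requests
from splunklib.searchcommands import dispatch, StreamingCommand, Configuration, Option


@Configuration()
class MitrePurpleLabCommand(StreamingCommand):
    # Named option, e.g.: | mitrepurplelab technique_id="T1059.003"
    technique_id = Option(require=True)

    def stream(self, records):
        url = "http://192.168.142.146:5000/api/mitre_attack_execution"
        token = "token"  # placeholder, as in the original script
        headers = {"Authorization": f"Bearer {token}"}
        params = {"technique_id": self.technique_id}

        # Do not print() here: stdout carries the chunked protocol, so any stray
        # output breaks the transport header parsing seen in search.log.
        response = requests.post(url, headers=headers, params=params)

        for record in records:
            record["mitre_status"] = response.status_code
            yield record


dispatch(MitrePurpleLabCommand, sys.argv, sys.stdin, sys.stdout, __name__)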
Hello Splunk Community, I am currently testing the Splunk Universal Forwarder to replace Logstash, which uses a lot of memory. I installed the latest UF 9.2.0.1 (and tried several other versions: 9.1.1, 8.2.9) and everything is working fine, except that the container where the Splunk UF is running seems to have a memory leak: even when I do not add any configuration to the Splunk UF, the memory of the container keeps increasing. My Docker image uses a FROM image that I use in many other containers without any leak, so I have no doubt the Splunk UF is at fault here. So I wonder if anyone using the Splunk UF is experiencing the same issue, or if there is an ongoing ticket tracking this memory leak? The memory leak is not huge, but my container is supposed to run 24/7, so I cannot afford any leak. Thanks for your feedback.
From this panel I want to change every label under the bars, e.g. from "Mon Jan 15" to "Jan 15". I am not getting anywhere; I checked in the UI settings and in the source code as well, but nothing shows up. Also, is it possible to meet this requirement from the dashboard?
Hi, I want to create a search to find all the events for which the last row exists but at least one row is missing. An example is attached below.

Splunk query:

`macro_events_prod_gch_comms_esa` gch_messageType="Seev.047*" host="p*" gch_status="*" NOT "BCS"
| table BO_PageNumber, BO_LastPage, gch_status
| rename BO_PageNumber as PageNo, BO_LastPage as LastPage, gch_status as Status
| sort PageNo

The requirement is to find all the events for which a row with LastPage=True exists and at least one row is missing whose PageNo is less than the PageNo of the row with LastPage=True.
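To illustrate the requirement itself (on hypothetical rows, not the real data): a message qualifies when a row with LastPage=True exists and some page number below it never appears. A tiny Python sketch of that logic:

# Hypothetical example rows of (PageNo, LastPage) for one message; the real data
# would come from the BO_PageNumber / BO_LastPage fields in the search above.
rows = [(1, False), (2, False), (4, True)]          # page 3 is missing
pages = {page for page, _ in rows}
last = max((page for page, is_last in rows if is_last), default=None)
if last is not None:
    missing = sorted(set(range(1, last + 1)) - pages)
    print(missing)  # -> [3], so this message should be reported; [] means complete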
How can I create Investigations in Splunk ES using REST APIs rather than the Splunk Web UI?
Hello, why does the Splunk drilldown open two tabs instead of one? Here's my setting: Image => Drilldown settings => On click, link to custom URL => put URL. When I clicked it, it opened two tabs. I found an old post, but nobody answered. Please help. Thanks.
https://community.splunk.com/t5/Dashboards-Visualizations/Why-is-Splunk-drilldown-setting-opening-two-tabs-instead-of-one/m-p/602444
Hi, we have Splunk running behind a load balancer, so we reach it on the standard port 443. But on the backend it uses a different port, which the LB connects to, hence that port needs to stay set as the Web port. The problem is that when we get alerts, Splunk still puts the port from the Web port setting in the URL, so the URL doesn't work and we need to manually edit it to remove the port. Is there no separate setting for this, so that the actual listening port and the port Splunk puts in the URL can be controlled separately?
Is there a way to add an interval setting to define the polling for a flat file? I am not sure why it was requested, but I was asked if it was possible and thought for sure it was, only to find that it is currently not an option according to the inputs.conf section in the Admin Manual: https://docs.splunk.com/Documentation/Splunk/latest/admin/Inputsconf I read that the default polling may be 1 ms, to collect the modified file in near real time. I offered an alternative: create a shell or PowerShell script (Get-Content, or some other scripting language) and set an interval to read the flat file at the desired time. However, I would have to duplicate all of the options available for a file monitor stanza, such as crcSalt and whitelist/blacklist, within the script, and that script would have to be code reviewed and go through a lengthy pipeline. Any help would be appreciated to say whether this is a definite no-go or a possible enhancement request to Splunk for the next version. Thank you.
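As a rough, hedged sketch of the scripted alternative mentioned above: a Python scripted input, run on a schedule via a [script://...] stanza with an interval in inputs.conf, could keep its own byte offset and emit only the new lines. The file paths below are placeholders, and it only reproduces the offset tracking, not monitor features like crcSalt or whitelist/blacklist handling.

import os
import sys

# Placeholders: the monitored flat file and a checkpoint file for the read offset.
LOG_FILE = r"C:\logs\flatfile.log"
CHECKPOINT = r"C:\logs\flatfile.offset"

def read_offset():
    try:
        with open(CHECKPOINT) as f:
            return int(f.read().strip() or 0)
    except (OSError, ValueError):
        return 0

def write_offset(offset):
    with open(CHECKPOINT, "w") as f:
        f.write(str(offset))

def main():
    offset = read_offset()
    size = os.path.getsize(LOG_FILE)
    if size < offset:
        offset = 0  # file was truncated or replaced; start over
    with open(LOG_FILE, "rb") as f:
        f.seek(offset)
        new_data = f.read()
    if new_data:
        # Scripted inputs emit events on stdout; Splunk indexes whatever is printed.
        sys.stdout.write(new_data.decode("utf-8", errors="replace"))
        write_offset(offset + len(new_data))

if __name__ == "__main__":
    main()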
It’s hard to read a headline today without seeing the acronym AI. In fact, Predictions 2024, the annual report from Splunk senior leadership, is heavily focused on AI's omnipresence – underscoring its inevitable influence on cybersecurity dynamics. But don’t worry, you’re not traversing this AI-dominated landscape alone. Splunk Education continues to create learning paths and training curricula to support digital resilience in the world of cybersecurity. In fact, our new Splunk Education e-book is designed to guide your cybersecurity learning journey – with coursework and programs that home in on what’s needed to identify and remediate new, AI-driven attacks and exploitations.

Here are just a few highlights from the new e-book:

Cybersecurity Training Programs: Splunk Education is growing its cybersecurity-centric curriculum, combining instructor-led courses with self-paced, hands-on learning labs. The curriculum and strategy support the demand for new skills as cybersecurity professionals continue to be challenged by more sophisticated cyberattacks and strong compliance frameworks and governance.

Cybersecurity Certification Tracks: Acknowledging the necessity of skill validation – along with the need to have an even more expansive and integrated view of data – Splunk Education provides cost-effective certifications in cybersecurity as a way to validate skills and recognize industry experts.

Free, Self-Paced Learning Opportunities: Committed to accessible education, Splunk offers free online cybersecurity courses based on expertise and interest levels. Additionally, Splunk Lantern remains an always-available, free resource to help users see what’s possible with their use cases and maximize Splunk's potential.

Partnerships with Academic Institutions: Through our Academic Alliance Program, we’re expanding our reach in non-profit universities, colleges, and schools – nurturing the next wave of cyber experts. This post-secondary program offers 21 free, self-paced courses with practical labs, faculty training and support, custom programming, and more.

Authorized Learning Partners: Splunk also helps learners experience Splunk Education Training and Certification locally through the Splunk Authorized Learning Partner (ALP) program – offering courses in learner language, timezone, and location.

Prepare for a new year of transformation. Use our Splunk Education e-book as your compass to navigate a digital age predicted to be fueled by AI-driven cyber threats.

-Callie Skokos on behalf of the Splunk Education Crew
Hi, I want to extract the fields destination, messages, and inflightMessages from the JSON below. This is one of the latest events:

{
  "analytics": [
    { "destination": "billing.events.prod", "messages": 0, "inflightMessages": 0 },
    { "destination": "billing.events.dev", "messages": 0, "inflightMessages": 0 },
    { "destination": "hub.values.prod", "messages": 0, "inflightMessages": 0 },
    { "destination": "hub.fifo-prod", "messages": 0, "inflightMessages": 0 }
  ]
}

This is the SPL I am using:

index=myindex sourcetype=mysourcetype
| spath input=_raw
| table analytics{}.destination, analytics{}.messages, analytics{}.inflightMessages

In the interesting fields I get "analytics{}.destination", but when I hover over it to see the values and the counts associated with them, each value shows a count of 2, even when the search returns only one event. Why is this happening, what is the issue? This data is generally MuleSoft MQ data.
Hello - admitted new guy here. I have a heavy forwarder sending data from a MySQL database table into Splunk once a day. Works great. But now I want to send the data from a 'customer' type table with about 200 rows, and I would like to replace the data every day, rather than append 200 new rows to the index every day. How is this best accomplished? I tried searching, but I may not even be using the correct terminology.
Hi, I've got a problem with this playbook code block: the custom functions I try to execute seem to hang indefinitely. I also know the custom function works, because I've successfully used it from a utility block. I've tried a few different arrangements of this logic, including initializing cfid with both of the custom function calls and consolidating the custom function names into a single while loop with phantom.completed, and I have used pass instead of sleep. But the custom function doesn't seem to return/complete. Here's another example, which is basically the same except it consolidates the while loops and executes both custom functions at the same time. Once either of the above scenarios (or something similar) is successful, I need to get the results from the custom function executions (below pic), combine them into a single string, and then send "data" to another function: > post_http_data(container=container, body=json.dumps({"text": data}). Any assistance would be great. Thanks.
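A hedged sketch of the callback pattern usually used for custom functions in SOAR playbooks: phantom.custom_function() is asynchronous, so busy-waiting on phantom.completed() inside the same block keeps that block from returning and the function can appear to hang; letting the block return and picking up the results in a callback avoids that. The custom function path, parameters, and result datapath below are placeholders, not the actual ones from the post.

import json

import phantom.rules as phantom


def run_my_custom_function(container=None, **kwargs):
    # Placeholder custom function path and parameters.
    parameters = [{"input_1": "some value"}]
    phantom.custom_function(
        custom_function="local/my_custom_function",
        parameters=parameters,
        name="run_my_custom_function",
        callback=my_custom_function_done,
    )
    # Return immediately; the engine invokes the callback when the function completes.


def my_custom_function_done(action=None, success=None, container=None, results=None,
                            handle=None, filtered_artifacts=None, filtered_results=None,
                            custom_function=None, **kwargs):
    # The datapath is an assumption; it depends on the custom function's output schema.
    cf_results = phantom.collect2(
        container=container,
        datapath=["run_my_custom_function:custom_function_result.data.*.item"],
    )
    data = " ".join(str(item[0]) for item in cf_results if item and item[0] is not None)
    phantom.debug(data)
    # ...then hand "data" to the post_http_data(...) call mentioned in the post.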
I have UFs configured on several Domain Controllers that point to a Heavy Forwarder, which points to Splunk Cloud. I am trying to configure Windows Event Logs. Application, System & DNS logs are working correctly; however, no Security logs for any of the DCs are coming in. The Splunk service is running with a service account that has proper admin permissions. I have edited the DC GPO to allow the service account access to 'Manage auditing and security log'. I am at a loss here and not sure what else to troubleshoot. Here is the inputs.conf file on each DC:

[WinEventLog://Application]
checkpointInterval = 5
current_only = 0
disabled = 0
start_from = oldest
index = wineventlog

[WinEventLog://Security]
checkpointInterval = 5
current_only = 0
disabled = 0
start_from = oldest
index = wineventlog

[WinEventLog://System]
checkpointInterval = 5
current_only = 0
disabled = 0
start_from = oldest
index = wineventlog

[WinEventLog://DNS Server]
checkpointInterval = 5
current_only = 0
disabled = 0
start_from = oldest
index = wineventlog
Hello, I've read the following documentation:
https://docs.splunk.com/Documentation/Splunk/9.2.0/Indexer/Backupindexeddata
https://docs.splunk.com/Documentation/Splunk/9.2.0/Admin/Backupconfigurations
Basically, to back up Splunk, I need to make a copy of "$SPLUNK_HOME/etc/*" and "$SPLUNK_HOME/var/lib/splunk/defaultdb/db/*" (after rolling the hot buckets). My question is, how is this restored? Would I just paste the copied files back into a working Splunk instance? Would the data then be searchable normally? Thank you
In this blog post, we will look at how to create Splunk Synthetics uptime HTTP tests in bulk from a CSV file with the Splunk Observability Synthetics Terraform provider. This code requires Terraform version 0.13+.

Create a CSV file with the details of the uptime tests, such as test name, URL, SSL validation, frequency, locations, etc., as follows:

name,url,ssl_validation,frequency,scheduling_strategy,active,user_agent,locations
Google test,https://www.google.com,true,5,round_robin,true,Splunk (Default),"aws-us-east-1,aws-us-west-1"
Amazon test,https://www.amazon.com,true,5,concurrent,true,Splunk (Default),"aws-us-east-1"

The above example CSV file has the following columns; please update the data in each column as per your requirements:

name - Uptime test name
url - Uptime test URL
ssl_validation - Set to true if you want to enable TLS/SSL validation, otherwise set to false
frequency - Test frequency (in minutes)
scheduling_strategy - Set to round_robin if you want to enable round-robin, otherwise set to concurrent
active - Set to true if you want to enable the test, otherwise set to false
user_agent - The user agent string you want to use, otherwise set Splunk (Default)
locations - The list of locations in double quotes, separated by commas, such as "aws-us-east-1,aws-us-west-1" or "aws-us-east-1". Here's the list of available Splunk Synthetics public locations.

Create a main.tf Terraform file as follows in the same folder where your CSV file is located:

terraform {
  required_providers {
    synthetics = {
      version = "2.0.1"
      source  = "splunk/synthetics"
    }
  }
}

provider "synthetics" {
  product = "observability"
  realm   = "REPLACE_REALM"
  apikey  = "REPLACE_WITH_API_TOKEN"
}

locals {
  http_test_data = csvdecode(file("${path.module}/<REPLACE_WITH_CSV_FILE_NAME>.csv"))
}

resource "synthetics_create_http_check_v2" "o11y_http_check" {
  for_each = { for test_data in local.http_test_data : test_data.name => test_data }

  test {
    active              = each.value.active
    frequency           = each.value.frequency
    location_ids        = split(",", each.value.locations)
    name                = each.value.name
    type                = "http"
    url                 = each.value.url
    scheduling_strategy = each.value.scheduling_strategy
    request_method      = "GET"
    verify_certificates = each.value.ssl_validation
    user_agent          = each.value.user_agent
  }
}

You can modify the above Terraform code as per your requirements and the data provided in the CSV file. Note that you will have to replace the Splunk Observability realm, the Splunk Observability API token, and the CSV file name.

Once the Terraform and CSV files are ready, run the following commands to create the HTTP uptime tests from the CSV file:

terraform init - initialize the Terraform provider
terraform plan - verify there are no errors
terraform apply - type yes as confirmation

Voila! You have now created uptime tests using the CSV data. If you want to delete existing uptime tests created with this Terraform automation, run the following command:

terraform destroy - type yes as confirmation
Hi Team, I am trying to create a dashboard pie chart visualization with this SPL query. We have total_apps = 300, and I want to show how many apps (count) currently exist out of this total. Note: I am using "dc" here because we have foo_foo_1, foo_foo_2, foo_foo_3 apps.

|rest /services/data/indexes
|rename title as index
|rex field=index "^foo_(?<appname>.+)"
|rex field=index "^foo_(?<appname>.+)_"
|table appname, index
|stats dc(appname) as currentapps
|eval currentapps = currentapps
|eval total_apps = 300

With this, the pie chart shows only total_apps or only currentapps, not both in a single pie chart. What is the issue?
Hi Community, I have to upgrade the SAP agents, and I would like to know whether the HTTP SDK instances are running directly on the SAP application servers, or whether the HTTP SDK instances and the SDK manager are running on separate Linux machines. How do I identify this? Thanks
Hi, I have a customer who has a 50GB Enterprise license on one network, and he wants to add SIEM, but only for a separate network which has a measly 5GB of daily volume. He understandably feels very strongly about being forced to purchase an equivalent 50GB SIEM license when all he needs is 5GB, and it is even on a completely separate network. Is it possible to have a separate Enterprise + SIEM license for a second network on the same site? I have heard claims that this is not allowed as far as Splunk is concerned; is there any basis to those claims? Thanks in advance for your responses.
Hi, I have ingested a CSV file by creating an input on a Windows server, but the challenge is that the logs are not getting extracted as fields. I want the data to be extracted into fields. Can someone please help me extract all the fields from the log? Thank you
Is there any good way to find out whether an index is used in any of the saved searches, alerts, reports, and dashboards?
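One possible approach, sketched (and hedged) with the splunk-sdk for Python (splunklib): walk the saved searches (which cover reports and alerts) and the dashboard views over the REST API and look for the index name in their definitions. Host, credentials, and the index name are placeholders, and the 'eai:data' field used for the dashboard XML is an assumption worth verifying on your version.

import splunklib.client as client

INDEX = "my_index"  # placeholder: the index you are looking for

# Placeholders: connect to the search head's management port with suitable credentials.
service = client.connect(host="localhost", port=8089,
                         username="admin", password="changeme")

# Saved searches include scheduled reports and alerts; check their SPL for the index.
for saved in service.saved_searches:
    if INDEX in saved["search"]:
        print("saved search / report / alert:", saved.name)

# Dashboards live under data/ui/views; their XML source is assumed to be in 'eai:data'.
views = client.Collection(service, "data/ui/views")
for view in views:
    if INDEX in (view.content.get("eai:data") or ""):
        print("dashboard:", view.name)

Note that this only catches literal occurrences of the index name; searches that reach the index through macros or event types would need those knowledge objects checked as well.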