All Topics

Hello Splunk Wizards, I know plenty of people have had similar issues, but I haven't been able to apply their resolutions to my problem. I'm doing a search-time field extraction to capture the login username, which includes a backslash. I have the regex correct, slightly modified from regex101 for Splunk:

(?P<User_Name>(domain\\\\\\S+))

In the field extraction wizard it perfectly grabs all the sample data (e.g. domain\username). However, this field doesn't show up in search when looking at the exact same sample data. I've run a verbose search and made sure all available fields are showing; it's not there. I've tried using group names I know Splunk isn't already using, with no improvement. I'm pretty sure it has to do with the backslash, because if I modify the regex to

(?P<User_Name>domain\S+)

the field extraction shows up in search, but it also captures data that isn't exactly correct. I've tried variations with more and fewer backslashes, and none of them work. I suppose I can live with a sloppy field extraction if that's all I can do, but the first regex really is perfect. Any ideas?
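For anyone picking this up, a minimal sketch of the two escaping layers involved (your_sourcetype is a placeholder and this is an illustration, not a confirmed fix): in a props.conf EXTRACT the value is handed to PCRE directly, so two backslashes match one literal backslash in the event:

[your_sourcetype]
EXTRACT-User_Name = (?P<User_Name>domain\\\S+)

An inline | rex goes through SPL string parsing first, so the same pattern generally needs its backslashes doubled again there, which is worth keeping in mind when comparing wizard previews with ad-hoc searches.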
In a locked-down environment where outbound traffic must be explicitly allowed, what IP range or URL needs to be reachable for the "splunk diag --upload" command to work? I'm getting the following error:

Unable to fetch case list: None
Cannot validate upload options, aborting...
Is it possible to change the default search performance to "high_perf" in Splunk Cloud? In the Splunk Cloud search bar you have the option of setting search performance to standard_perf (the search default), limited_perf, high_perf, or Policy-Based Pool. I have begun using high_perf for my queries because otherwise things are far too slow, but it constantly reverts to the default standard_perf. I cannot find a setting for this anywhere and have had no luck finding documentation either.
My teammate and I have been trying to summarize our environment to automatically build a data dictionary. Our latest feature was to add a lastSeen time to use as a rudimentary data integrity check. Recently this has stopped working on the _internal index: tstats reports the max time on _internal as a week ago, even though a straight SPL search on index=_internal returns results for today or any other arbitrary slice of time I query over the last week. This suggests to me that the tsidx files for _internal are messed up. To make matters more confusing, yesterday I was able to submit the same query and get a correct max(_time) for index=_internal. Does anyone have an idea of what is going on with this behavior? Better yet, what do I need to do to fix it? If it matters, this is a clustered search head environment and we also have quite a few indexers.

Usual results:

| tstats count max(_time) as lastSeen where index=_* earliest=-20d@d latest=@m by index
| convert ctime(lastSeen)

index            count        lastSeen
_audit           999999999    10/22/2021 15:39:59
_internal        9999999      10/14/2021 20:09:35
_introspection   999999999    10/22/2021 15:39:59
_telemetry       999          10/22/2021 12:05:05
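A minimal sketch of the kind of side-by-side check the post describes, comparing the tsidx-backed view with a raw event search over the same window (the 7-day window is an arbitrary choice here, not something from the post):

| tstats count as tstatsCount max(_time) as tstatsLastSeen where index=_internal earliest=-7d@d latest=now
| appendcols
    [ search index=_internal earliest=-7d@d latest=now
      | stats count as eventCount max(_time) as eventLastSeen ]
| convert ctime(tstatsLastSeen) ctime(eventLastSeen)

If the two lastSeen values diverge the way the table above shows, that points at the index-time summaries rather than the events themselves.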
I want to use predicted values in my search and apply them to a time chart. What would be the best way to store these values for future use? I am thinking a summary index would be ideal, but I am not sure whether there is a different way I might want to store them. I also want the time chart to show some bounds based on the predicted values, to help with analyzing what expected performance/traffic would look like.
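A minimal sketch of one way to do this with a scheduled search, assuming a hypothetical summary index named summary_predictions and a placeholder metric (the index, sourcetype, span, and horizon are all illustrative):

index=web sourcetype=access_combined
| timechart span=1h count as requests
| predict requests as predicted future_timespan=24 upper95=upper lower95=lower
| collect index=summary_predictions

A dashboard search could then read index=summary_predictions and timechart predicted, upper, and lower alongside the live metric to show the expected band.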
I am looking for a way to automate the export of node exception data so that I can persist it into my Elastic stack, both for historical purposes and for telemetry data analysis. I'm able to persist metric data easily through the Metric API, but I'm not seeing anything similar for this area. TIA, Bill Youngman
Has anyone tried this?  
Hello, I have been asked to monitor our HTTP Event Forwarder. Is there a health check in Splunk that would tell me the forwarder's status? Or is there another way I could see whether the Event Forwarder is down without going into Splunk Enterprise? Perhaps a URL that would simply give me an HTTP status code or something. Thanks, Tom
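If the "HTTP Event Forwarder" here means the HTTP Event Collector (HEC), a minimal sketch of the kind of URL check being asked about is below; the hostname and the default HEC port 8088 are assumptions about the deployment:

curl -k https://splunk.example.com:8088/services/collector/health

When the collector is up, this returns HTTP 200 with a small JSON body such as {"text":"HEC is healthy","code":17}, which an external monitor can poll without logging in to Splunk Enterprise.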
I have a field whose value is the following numbers: 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12. I would like to do a "regex field=" to separate them on the pipe, and then perform a count of how many separated values there are (in this case, 12). Thank you.
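A minimal sketch of one way to get that count without writing a regex, assuming a hypothetical field named numbers that holds the pipe-delimited string:

| makeresults
| eval numbers="1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 |12"
| makemv delim="|" numbers
| eval value_count=mvcount(numbers)
| table numbers value_count

makemv splits the single value into a multivalue field on the pipe character (each value keeps its surrounding spaces unless trimmed), and mvcount returns 12 for this example.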
Hi All, I wrote this in an attempt to reject all RFC1918-to-RFC1918 logs for Windows event logs with EventCode 5156; basically, log anything external, but not internal-to-internal communication. The sample log is a snippet of what I am trying to drop.

Props.conf

[WinEventLog:Security]
TRANSFORMS-sec = WinEventCode5156Drop,WinEventCodeSecDrop,WinEventCodeSecPass

Transforms.conf (is order of operations my issue here?)

[WinEventCode5156Drop]
REGEX=((EventCode(?:\S+)5156)[\s\S]*(((((?:Source Address|Destination Address)(?:\S+))(?:\s)+10\.))|(((?:Source Address|Destination Address)(?:\S+))(?:\s)+172\.1[6-9])|(((?:Source Address|Destination Address)(?:\S+))(?:\s)+172\.2[0-9])|(((?:Source Address|Destination Address)(?:\S+))(?:\s)+172\.3[0-1])|(((?:Source Address|Destination Address)(?:\S+))(?:\s)+127\.0\.0\.1)|(((?:Source Address|Destination Address)(?:\S+))(?:\s)+192\.168))[\s\S]*((((?:Source Address|Destination Address)(?:\S+))(?:\s)+10\.)|(((?:Source Address|Destination Address)(?:\S+))(?:\s)+172\.1[6-9])|(((?:Source Address|Destination Address)(?:\S+))(?:\s)+172\.2[0-9])|(((?:Source Address|Destination Address)(?:\S+))(?:\s)+172\.3[0-1])|(((?:Source Address|Destination Address)(?:\S+))(?:\s)+127\.0\.0\.1)|(((?:Source Address|Destination Address)(?:\S+))(?:\s)+192\.168)))
DEST_KEY = queue
FORMAT = nullQueue

[WinEventCodeSecDrop]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[WinEventCodeSecPass]
REGEX=(?:^EventCode=|<EventID>)(4618|4649|4719|4765|4766|4794|4897|4964|5124|550|1102|4621|4675|4692|4693|4706|4713|4714|4715|4716|4724|4727|4735|4737|4739|4754|4755|4764|4764|480|4816|4865|4866|4867|4868|4870|4882|4885|4890|4892|4896|4906|4907|4908|4912|4960|4961|4962|4963|4965|4976|4977|4978|4983|4984|5027|5028|5029|5030|5035|5037|5038|5120|5121|5122|5123|5376|5377|5453|5480|5483|5484|5485|6145|6273|6274|6275|6276|6277|6278|6279|6280|640|619|24586|24592|24593|2454|4608|4609|4610|4611|4612|4614|4615|4616|4622|4624|4625|4634|4646|4647|4648|4650|4651|4652|4653|4654|4655|4656|4657|4658|4659|4660|4661|4662|4663|4664|4665|4666|4667|4668|4670|4671|4672|4673|4674|4688|4689|4690|4691|4694|4695|4696|4697|4698|4699|4700|4701|4702|4704|4705|4707|4709|4710|4711|712|4717|4718|4720|4722|4723|4725|4726|4728|4729|4730|4731|4732|4733|4734|4738|4740|4741|4742|4743|4744|4745|4746|4747|4748|4749|4750|4751|4752|473|4756|4757|4758|4759|4760|4761|4762|4767|4768|4769|4770|4771|4772|4774|4775|4776|4777|4778|4779|4781|4782|4783|4784|4785|4786|4787|4788|4789|4790|4793|4800|4801|4802|4803|4864|4869|4871|4872|4873|4874|4875|4876|4877|4878|4879|4880|4881|4883|4884|4886|4887|4888|4889|4891|4893|4894|4895|4898|902|4904|4905|4909|4910|4928|4929|4930|4931|4932|4933|4934|4935|4936|4937|4944|4945|4946|4947|4948|4949|4950|4951|4952|4953|4954|4956|4957|4958|499|4980|4981|4982|4985|5024|5025|5031|5032|5033|5034|5039|5040|5041|5042|5043|5044|5045|5046|5047|5048|5050|5051|5056|5057|5058|5059|5060|5061|5062|5063|5064|5065|5066|5067|5068|5069|5070|5125|5126|5127|5136|5137|5138|5139|5140|5141|5152|5153|5154|5155|5156|5157|5158|5159|5378|5440|5441|5442|443|5444|5446|5447|5448|5449|5450|5451|5452|5456|5457|5458|5459|5460|5461|5462|5463|5464|5465|5466|5467|5468|5471|5472|5473|5474|5477|5479|5632|5633|5712|5888|5889|5890|608|6144|6272|561|563|625|613|614|615|616|24577|24578|24579|24580|24581|24582|24583|24584|24588|24595|24621|5049|5478)
DEST_KEY = queue
FORMAT = indexQueue

I can't figure out why this isn't working. (A simplified sketch of the null-queue idea follows the sample log below.)

Sample Log

10/21/2021 10:06:05 AM
LogName=Security
SourceName=Microsoft Windows security auditing.
EventCode=5156
EventType=0
Type=Information
ComputerName=(REDACTED BY ME THE POSTER)
TaskCategory=Filtering Platform Connection
OpCode=Info
RecordNumber=7865970185
Keywords=Audit Success
Message=The Windows Filtering Platform has permitted a connection.

Application Information:
    Process ID: 1548
    Application Name: \device\harddiskvolume4\windows\system32\dns.exe

Network Information:
    Direction: Inbound
    Source Address: 10.10.211.7
    Source Port: 53
    Destination Address: 10.1.0.0
    Destination Port: 57834
    Protocol: 17

Filter Information:
    Filter Run-Time ID: 90427
    Layer Name: Receive/Accept
    Layer Run-Time ID: 44
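A minimal sketch of the null-queue pattern being attempted, heavily simplified (the stanza name is reused from the post, but the regex below is an illustrative reduction that only checks one Source Address and one Destination Address for private/loopback ranges; it is not a drop-in replacement for the full expression above):

[WinEventCode5156Drop]
REGEX = EventCode=5156[\s\S]*Source Address:\s+(10\.|172\.(1[6-9]|2[0-9]|3[01])\.|192\.168\.|127\.0\.0\.1)[\s\S]*Destination Address:\s+(10\.|172\.(1[6-9]|2[0-9]|3[01])\.|192\.168\.|127\.0\.0\.1)
DEST_KEY = queue
FORMAT = nullQueue

On ordering: the transforms named in TRANSFORMS-sec run left to right, and each one that matches can overwrite the queue destination set by an earlier one, so a later indexQueue match will resurrect an event that a previous stanza sent to nullQueue.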
Hi, I need to delete some KV Store collections, and the only way I have to perform this kind of action is the REST API, since I'm on Splunk Cloud. When I create KV Store collections, I use this request, following the docs in "Use the Splunk REST API to manage KV Store collections and data in Splunk Cloud Platform or Splunk Enterprise":

curl -k -u USER -d name=KV-COLLECTION-NAME https://HOSTNAME.splunkcloud.com:8089/servicesNS/nobody/APP_NAME/storage/collections/config

What I would like to know is how I can delete the KV Store collection using the same approach. Thanks
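A minimal sketch of the DELETE counterpart, reusing the placeholders from the post (USER, HOSTNAME, APP_NAME, and the collection name created above); per the KV Store REST endpoints, deleting the collection's config entry removes the collection:

curl -k -u USER -X DELETE https://HOSTNAME.splunkcloud.com:8089/servicesNS/nobody/APP_NAME/storage/collections/config/KV-COLLECTION-NAME

To clear only the records while keeping the collection, the same DELETE can be issued against .../storage/collections/data/KV-COLLECTION-NAME instead.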
Hi Team, I am working in a multisite cluster environment. We need to perform certificate renewals with the 21 passphrase key, and we use the same passphrase in the server.conf file on every server across the multisite cluster. During the cluster restart we see "Master node down" in splunkd.log and are unable to progress. During investigation we found that in the [general] stanza the passphrase is the encrypted value of changeme instead of the encrypted value of the 21 passphrase key, while in the [clustering] stanza the passphrase is the encrypted value of the 21 passphrase key. To make the cluster work, I think we need to overwrite the default password (i.e. changeme) in server.conf with the custom password (i.e. the 21 passphrase key), but currently that is not happening. We need to complete the certificate renewals before 30/10/2021, so this is a bit urgent. Any help would be appreciated. Many Thanks, Lalitha
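A minimal sketch of what server.conf is expected to look like before the restart, assuming the intent described above is that both stanzas carry the same secret (YOUR_PASSPHRASE is a placeholder for the real 21 passphrase key; Splunk rewrites a plaintext pass4SymmKey into its encrypted $7$ form on the next restart, so the plaintext only needs to be present at edit time):

[general]
pass4SymmKey = YOUR_PASSPHRASE

[clustering]
pass4SymmKey = YOUR_PASSPHRASE

If the [general] value keeps coming back as the encrypted form of changeme, it is worth checking with btool whether another app or a default layer is supplying it, e.g. splunk btool server list general --debug.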
How do I use curl to overwrite the host or the query of an alert? I was testing the example below, where I need to overwrite the SPL inside an alert. Ideally I just want to overwrite the host in the SPL query, plus one other variable; however, it seems I need to overwrite the full query.

curl -k -u dev_admin:devadmin https://localhost:8089/servicesNS/admin/lookup_editor/saved/searches/KPI_Alert_TEMPLATE -d cron_schedule="31 17 * * *" search="index=mlc_live | stats count(host) by host"
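A minimal sketch of the same call with each field passed as its own form parameter (credentials, endpoint, and SPL are from the post; the host filter web-01 is a hypothetical value added only to illustrate that the saved/searches endpoint takes the complete search string, so "overwriting just the host" means posting a full query that already contains the new host):

curl -k -u dev_admin:devadmin https://localhost:8089/servicesNS/admin/lookup_editor/saved/searches/KPI_Alert_TEMPLATE \
  -d cron_schedule="31 17 * * *" \
  --data-urlencode search="index=mlc_live host=web-01 | stats count(host) by host"

One approach sometimes used to avoid rewriting the whole query is to keep the variable parts in a macro or lookup that the saved search references, and update that instead.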
Hi folks, a user in my company discovered that the pre-built list of correlation searches in the filter on the Incident Review dashboard is incomplete. I can retrieve the correlation searches in the Content view and in the Alerts view, and they have triggered notables. I tried to find the search that is run to populate the filter, but my HTML/JS skills are not good enough. Any ideas? Thanks!
Hi all, I'm just about to upgrade our Phantom / Splunk SOAR version to 5.0.1. The version compatibility matrix in the documentation for the Phantom Remote Search app suggests that this version isn't supported, though ( https://docs.splunk.com/Documentation/PhantomRemoteSearch/1.0.17/PhantomRemoteSearch/Abouttheapp ). I'm sure that it is compatible, but could someone please confirm before I upgrade my production Phantom platform? Also, just an observation: 14 indexes!!!! Would it not be more in keeping with general recommendations/strategy to have one index (or more for multiple Phantom instances) and multiple sourcetypes? Many thanks, Mark
I am in the process of integrating AppDynamics with our build tool. I am going to add a step to the build to create an APPLICATION_DEPLOYMENT event whenever there is a code deployment to the production server. I have experimented with the event generation API and have it working. The API documentation states that events have a lifespan of two weeks unless the event is archived. Is there a way to have an event automatically archived, as we want deployment events to last for a year? Is there another way to capture what we need? Dale Chapman
Need some help with this one: I am trying to ingest logs from my Workday TA and the logs have stopped reporting. I am seeing:

Invalid key in stanza [workday://user_activity] in /opt/splunk/etc/apps/TA-workday/local/inputs.conf, line 2: include_target (value: 0).

[workday://user_activity]
include_target = 0
index = workday
input_name = user_activity
interval = 300
include_target_details = 0
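A minimal sketch of the stanza with the flagged key removed, on the assumption that include_target is simply not a parameter this input accepts (note that include_target_details, already present on the last line, passes validation and may be the key that was intended; the TA's README or inputs.conf.spec is the authority here):

[workday://user_activity]
index = workday
input_name = user_activity
interval = 300
include_target_details = 0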
Hi, I need to use a post-process search to display a timechart. Here is my base search (id) configuration:

<search id="test">
  <query>index=tutu sourcetype="ica" $source$ $type$ $domain$ $site$ $ezconf$ | fields ica_latency_last_recorded ica_latency_session_avg idle_sec site host</query>
  <earliest>-7d@h</earliest>
  <latest>now</latest>
</search>

and here is the post-process (base) configuration:

<search base="test">
  <query> | search idle_sec &lt; 300 | timechart span=1d avg(ica_latency_session_avg) as "Latence moyenne de la session (ms)"</query>
</search>

As you can see, my timechart covers the last 7 days, but no values are returned. What is wrong, please?
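A minimal sketch of one commonly suggested adjustment for this pattern, on the assumption (not a confirmed diagnosis) that the post-process timechart has nothing to chart on because the base search's | fields list does not pass _time through:

<search id="test">
  <query>index=tutu sourcetype="ica" $source$ $type$ $domain$ $site$ $ezconf$ | fields _time ica_latency_last_recorded ica_latency_session_avg idle_sec site host</query>
  <earliest>-7d@h</earliest>
  <latest>now</latest>
</search>

Another frequently recommended shape is to make the base search a transforming search, since non-transforming base searches are also subject to an event-count cap that can silently truncate 7 days of raw events.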
I have a Python script which makes an API call and gets the events. The number of events it collects is correct; however, it is adding duplicate entries per field. Can you please assist? Here is my script:

import json

# call the REST endpoint and validate the response
response = helper.send_http_request(rest_url, 'GET', parameters=queryParam, payload=None, headers=headers, cookies=None, verify=False, cert=None, timeout=None, use_proxy=False)
r_headers = response.headers
r_json = response.json()
r_status = response.status_code
if r_status != 200:
    response.raise_for_status()

# keep only files we have not checkpointed before
final_result = []
for _file in r_json:
    responseStr = ''
    fileid = str(_file["fileid"])
    state = helper.get_check_point(str(fileid))
    if state is None:
        final_result.append(_file)
        helper.save_check_point(str(fileid), "Indexed")

# write all new files as a single event containing a JSON array
event = helper.new_event(json.dumps(final_result), time=None, host=None, index=None, source=None, sourcetype=None, done=True, unbroken=True)
ew.write_event(event)

The API response:

[
  {
    "fileid": "abc.txt",
    "source": "source1",
    "destination": "dest1",
    "servername": "server1"
  },
  {
    "fileid": "xyz.txt",
    "source": "source2",
    "destination": "dest2",
    "servername": "server2"
  }
]

The data after collection into the index looks like this (each field value duplicated):

fileid            source            destination   servername
abc.txt abc.txt   source1 source1   dest1 dest1   server1 server1
xyz.txt xyz.txt   source2 source2   dest2 dest2   server2 server2
We are starting our journey into Splunk and I need some help. I am trying to send an alert when a new version of antivirus is installed on our machines. I am monitoring the Application Windows event log, so it would be something like: grab the version from 20 minutes ago in the logs and, if it is different from the current version, send the alert.

"Message=Windows Installer installed the product. Product Name: Antivirus Software. Product Version: 1.0.0.000.1. Product Language: 001. Manufacturer: Antivirus. Installation success or error status: 0"

Any ideas on how to start this search?
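A minimal sketch of one way to start, assuming a hypothetical index and sourcetype for the Application event log (wineventlog / WinEventLog:Application are placeholders), extracting the version from the Message text and flagging hosts where more than one distinct version appears within the search window:

index=wineventlog sourcetype="WinEventLog:Application" "Windows Installer installed the product" "Product Name: Antivirus Software"
| rex field=Message "Product Version: (?<product_version>[\d\.]+)"
| stats dc(product_version) as version_count values(product_version) as versions latest(product_version) as current_version by host
| where version_count > 1

Scheduled every 20 minutes over, say, the last 24 hours, this would fire whenever a host shows a version different from what it had before; comparing against a lookup of last-known versions per host is another common pattern for the same alert.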