I have two lookups. One lookup has hostnames and IPs, and the other has only hostnames. I would like to run a search that takes each hostname from the second lookup, checks whether it exists in the first, and, if it does, shows the hostname and its IP in the search results.
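A sketch of one approach, assuming the hostname-only lookup is named hostnames.csv (field hostname) and the other is hosts_ips.csv (fields hostname and ip) — all names here are placeholders:

```spl
| inputlookup hostnames.csv
| lookup hosts_ips.csv hostname OUTPUT ip
| where isnotnull(ip)
| table hostname ip
```

The lookup command fills in ip only where the hostname exists in the first lookup, so the where clause keeps just the matching rows.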
I have a lookup of hosts with a field Last_Scan_Datetime, whose values were formatted using strftime(_time, "%Y-%m-%d-%H.%M.%S"). How would I go about searching for hosts that were scanned in the last 3 days? Everything I've found regarding searching by time has involved searching an index.
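One hedged sketch: parse the string back to epoch time with strptime (using the same format string) and compare against relative_time. The lookup name is a placeholder:

```spl
| inputlookup scanned_hosts.csv
| eval scan_epoch = strptime(Last_Scan_Datetime, "%Y-%m-%d-%H.%M.%S")
| where scan_epoch >= relative_time(now(), "-3d")
```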
Hello, I downloaded a 60-day free trial of Splunk Enterprise on Friday to use as part of a training class. Things worked well on Friday, but I cannot log in to the account this morning. Any help would be appreciated. Thank you, Cynthia
Hello, if I have a detector in Splunk Observability posting notifications to Slack, is it possible to configure it to send a batch of similar alerts (e.g., 2-3 pods in the same Kubernetes namespace aren't ready) as a single message, rather than one message per pod?
Hi, I am able to run a normal search via the REST API using the syntax below:   https://SearchHead_host:8089/servicesNS/admin/search/search/jobs/export -d search="search index%3D_internal | head 3" -d output_mode=csv/xml/json   But when I use a multisearch query, the search doesn't return any results. Does anyone know how to overcome this? Thanks, Arun
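One thing worth checking: multisearch is a generating command, so the SPL passed to the export endpoint must begin with a pipe rather than the implicit search keyword, and the whole string should be URL-encoded (e.g. with curl's --data-urlencode). A sketch of the query itself, with placeholder subsearches:

```spl
| multisearch
    [ search index=_internal sourcetype=splunkd ]
    [ search index=_internal sourcetype=splunkd_ui_access ]
```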
Hi, we have Dev and Prod Linux servers which contain Splunk agents. The Prod infrastructure (v8.2.2.1) contains: - Heavy Forwarder - 3 Indexers - Search Head - DS, LM, MC, SHCD - Agents   The Dev infrastructure (v8.0.1) contains: - Search Head - Indexer - Deployment Server   I want to know how to upgrade correctly, i.e., which servers we must update first, and how I can back up our apps so we don't lose anything in the process.   Thank you very much.
When I try to use: transforms.conf [setnull] REGEX = 192\.168\.1\.50, 172\.16\.1\.50 DEST_KEY = queue FORMAT = nullQueue  props.conf [cisco] TRANSFORMS-null = setnull  I still get all results in the events. But when I use only one IP, it works fine. Is there any way to exclude more than one IP?
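For what it's worth, REGEX takes a single regular expression, not a comma-separated list, so the usual way to drop events from multiple IPs is alternation with | (a sketch, reusing the same [setnull] stanza as above):

```ini
[setnull]
REGEX = 192\.168\.1\.50|172\.16\.1\.50
DEST_KEY = queue
FORMAT = nullQueue
```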
Hi All, I'm looking to find the latest entry for each user; there should be no duplicate entries for any user. Profile ----->                   acc name: john                   accexpire: 2023/04/25                    company: apple                     country: USA                    status: active                    statuschanged: 2023/03/12 How can we achieve this with an SPL query? I need to load the result into an output lookup table dynamically.   Thanks.
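A sketch of the usual pattern, assuming the events carry the fields shown (acc_name, accexpire, etc.) and the target lookup is user_profiles.csv — all names here are placeholders:

```spl
index=your_index sourcetype=your_sourcetype
| stats latest(accexpire) as accexpire latest(company) as company
        latest(country) as country latest(status) as status
        latest(statuschanged) as statuschanged by acc_name
| outputlookup user_profiles.csv
```

stats latest(...) by acc_name guarantees one row per user; sort - _time followed by dedup acc_name is an alternative.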
Hello Splunkers, has anyone tried the approach of importing services using a recurring import job? For example, using a CSV as the definition for services - dependent services - entities - service template, then importing it on a schedule to make sure no unwanted changes affect any service design. I am facing an issue while doing this: the "replace" update type is not working; it always does an "upsert", which means that if I remove an entity from the CSV, it is not removed from the entity rule. Is this a bug, or does Splunk only apply conflict resolution for entity import jobs and not for service import jobs? Let me know your views.
After installing Splunk 9 we have a problem with decoding LDAP events. We tried several apps, but none of them gave us correct results. We wanted to use the app "Encode / Decode Data for Splunk", but we can't find any instructions on how to use it. Does anyone have experience with base64 decoding in Splunk 9?
Hi, we have a Data Model based search that we filter with a lookup (with match_type WILDCARD) that matches on several fields: | tstats count, values(Processes.dest) as dest, dc(Processes.dest) as dest_dc, min(_time) as earliest, max(_time) as latest, values(Processes.user) as user, dc(Processes.user) as user_dc from datamodel=Endpoint.Processes by Processes.process_guid Processes.parent_process_guid Processes.parent_process Processes.parent_process_path Processes.process Processes.process_path Processes.process_hash Processes.user | rex field=Processes.process_hash "MD5=(?<process_md5>[A-Z0-9]+)" | `drop_dm_object_name(Processes)` | lookup sysmon_rules parent_process parent_process_path process process_path process_md5 OUTPUT description score This works well and saves us from having multiple searches in place, but it would be great if there were something like a match_type REGEX for lookups. We could then combine several entries in the lookup into a single line. For example, these 4 lines: score,description,parent_process_path,parent_process,process_path,process,process_md5 80,Office: Execution MSHTA,C:\Program Files (x86)\Microsoft Office\root\Office*,*,*\mshta.exe,*,* 80,Office: Execution PWSH,C:\Program Files (x86)\Microsoft Office\root\Office*,*,*\powershell.exe,*,* 80,Office: Execution WSCRIPT,C:\Program Files (x86)\Microsoft Office\root\Office*,*,*\wscript.exe,*,* 80,Office: Execution CMD,C:\Program Files (x86)\Microsoft Office\root\Office*,*,*\cmd.exe,*,* could be combined into: score,description,parent_process_path,parent_process,process_path,process,process_md5 80,Office: Execution susp child,(?i)C:\Program Files (x86)\Microsoft Office\root\Office.*,.*,(cmd.exe|wscript.exe|powershell.exe|mshta.exe),.*,.* We want to keep the ability to match against multiple fields. Is there a trick (using inputlookup, map, ...) to optimize this? We're at a point where the lookup is getting cluttered because of small variations of processes in the Endpoint data model that we would like to alert on.   Hints, tips & help are appreciated. Chris
I have a Splunk query which runs every 5 minutes and alerts if a certain keyword has not been logged in an index in the last 5 minutes. But whenever there are Splunk event delays, it triggers a false alert. Is there any way I can differentiate whether the keyword is really not being logged, or whether it is just latency in the events? I tried using (_indextime - _time), but this only works once the event has been received.
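One common workaround is to key the alert window off index time instead of event time, so late-arriving events are still counted. A sketch, with placeholder index and keyword:

```spl
index=your_index "YOUR_KEYWORD" earliest=-30m _index_earliest=-5m _index_latest=now
| stats count
```

The wide earliest covers delayed events, while _index_earliest restricts the search to events that actually arrived in the last 5 minutes; the alert fires when count is 0.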
Hi, I have a combination of consumer limits, e.g., A=1000, b=500, c=500, d=200, rest=100. Basically, I want a list of all consumers who have reached 80% of their limit. I wrote a query for one consumer; how can I accommodate all the other consumers with their different limits? Should I write a separate query for each consumer? sourcetype="OS" operation= "ABC" consumer_src="A" | search minutesago= 1 | stats count by consumer_src | where count >= 0.8 * 1000
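Rather than one query per consumer, the per-consumer limits can live in a lookup. A sketch, assuming a hypothetical lookup consumer_limits.csv with columns consumer_src and limit:

```spl
sourcetype="OS" operation="ABC"
| stats count by consumer_src
| lookup consumer_limits.csv consumer_src OUTPUT limit
| where count >= 0.8 * limit
```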
I am building a query where I want to use a top-10 list of values from a lookup table, and then run a search against those entries (each entry in a different query). The basic search is something like this:     | tstats count as count where index="myindex" id="some id value" by _time CAUSE_VALUE span=5m | timechart sum(count) as total_count span=5min     The query against the lookup table that provides the value for the ID is something like this:     | inputlookup lookuptable.csv | sort 10 -dm | table oper, dm | transpose 10 | rename "row "* AS "value_in*" | eval top1=value_in1     tstats needs to be the first command in the query, but with that being the case, I can't set the variable before it. The ideal query would be something like this, however it is not possible (because of the above):     '''~ Set Variable top1 ~''' | inputlookup lookuptable.csv | sort 10 -dm | table oper, dm | transpose 10 | rename "row "* AS "value_in*" | eval top1=value_in1 '''~ Use Variable ~''' | tstats count as count where index="myindex" id=top1 by _time CAUSE_VALUE span=5m | timechart sum(count) as total_count span=5min     I did read some similar questions where it was suggested to use | where id=<whatever>, but that doesn't work in my case because of the tstats. Any suggestions?
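A subsearch may do what the "variable" cannot, since subsearches are evaluated before the outer tstats runs. A sketch using the return command (hedged: as I understand it, multiple returned rows are OR'd into the outer search; field names follow the question):

```spl
| tstats count as count
    where index="myindex"
      [ | inputlookup lookuptable.csv | sort 10 -dm | return 10 id=oper ]
    by _time CAUSE_VALUE span=5m
| timechart sum(count) as total_count span=5m
```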
Hi, I have a policy.csv file with 2 columns: user                   tags Andre               IT Kleo                  Management Vlad                  Finance   And I also have an index=events with events and a field "user". What I want is to count the number of events for each tag by user, for example: User          Tags         Count Andre         IT           55   I've tried this, but it counts the number of rows from the CSV, not from the index: |inputlookup policy.csv | join type=left tags [|tstats count where index=events by user] |stats count by user tags
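A sketch that reverses the direction: count from the index first, then attach the tag from the lookup, so the counts come from events rather than CSV rows:

```spl
| tstats count where index=events by user
| lookup policy.csv user OUTPUT tags
| stats sum(count) as Count by user tags
```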
Hi, I have a query that searches over 4 different indexes (AIBA, AIBC, AIBP, AIBX) as follows: index=AIB* "Windows" EventCode="*" | stats count as NumOfLogs by dvc_NodeName | sort - NumOfLogs | eval Host = dvc_NodeName | table Host , NumOfLogs, index However, I need one of the fields in the output table to be "index", but nothing is currently output for it by my query. Can you please help? Thanks as always!
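One likely cause: stats drops any field not in its by clause, so index never reaches the table command. A sketch that carries it through (field names as in the question):

```spl
index=AIB* "Windows" EventCode=*
| stats count as NumOfLogs by index dvc_NodeName
| sort - NumOfLogs
| rename dvc_NodeName as Host, index as Index
| table Host NumOfLogs Index
```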
Hi all, I want to get data from an XML file for a selected source (e.g., Source_A, Source_B, ...). When no data is found in the XML file, is it possible to get data from the referenced XML sources and pick the latest one to display? The referenced sources are listed in a text file in the same location as the selected source. The folder structure looks like this: D:\datasource\<source_name>\release.xml D:\datasource\<source_name>\referenced_sources.txt The referenced_sources text file contains values separated by commas, for example: Source_A,Source_B And my current SPL to retrieve the data is: index=sample_index source=*$selected_source$* source="*.xml"   Thanks in advance
I have this current search: index="intau_workfusion" host=* sourcetype="services_status.out.log" service="HTTP/1.1" status=* | chart count by status   When I run it and save it as a pie chart, I can see the percentage of the different status codes, but I want it in table format, and I can't divide by the total in my eval because all the other status codes still fall under "status". How can I solve this?
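A sketch using eventstats to carry the overall total alongside each row, starting from the same base search as above:

```spl
index="intau_workfusion" host=* sourcetype="services_status.out.log" service="HTTP/1.1" status=*
| stats count by status
| eventstats sum(count) as total
| eval percent = round(100 * count / total, 2)
| fields status count percent
```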
Hello! I'm trying to make a day-wise timechart of actions by unique user for the proxy logs, like the one below, but I'm unable to add the action field as a column. Please suggest a command to achieve this requirement. _time | Action | Raj | Jane | Tom 2023-03-11T00:00:00.000+0000 | Permitted | 1 | 1 | 1 2023-03-11T00:00:00.000+0000 | Block | 0 | 2 | 4 The query below generates the above result, but without the action column:   | from datamodel:web | timechart span=1d count(actions) as Actions by user useother=0 limit=10 | addcoltotals
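A sketch of one common workaround: timechart can only split by one field, so combine action and user into a single series field first (datamodel and field names follow the question and are assumptions):

```spl
| from datamodel:web
| eval action_user = action . "/" . user
| timechart span=1d count by action_user useother=0 limit=10
```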
What is the minimal set of permissions for users to create Splunk Dashboard Studio dashboards? Background info: I have users I want to give permission to create Dashboard Studio dashboards, but with their current role set they can only create Classic dashboards. The only role capability I have found that gives them this ability is admin_all_objects, but that is problematic, and there doesn't seem to be a dedicated capability for this in the capabilities doc: https://docs.splunk.com/Documentation/SplunkCloud/9.0.2209/Security/Rolesandcapabilities Thanks, Daniel