I'm using this search, which gives me the information, but I need the current total of users (FYI, I used a 15-minute time frame):

(index=* OR index=_*) (index=* OR index=_*) index=wineventlog "ComputerName=sample_server" "EventCode=4624" | fields "_time", "user" | dedup "user"
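One way to sketch a running total rather than a row per user is to replace dedup with a distinct count (same base search assumed from the question):

```
(index=* OR index=_*) index=wineventlog "ComputerName=sample_server" "EventCode=4624"
| stats dc(user) AS total_users
```

dedup keeps one event per user; stats dc() collapses everything into a single count, which is usually what "current total users" means.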
Hi, I have set up a scheduled report that runs every hour and writes the result set to a CSV file. Activity -> Jobs shows that the report ran on schedule, but I don't see the expected results in the output CSV. The job manager shows the search returned 0 events, yet when I open the job link I see more than 6000 results in the table. Why do the events show up as 0 in the job manager?
I am looking to include certain fields that are in the contributing events for a certain Correlation Search/Notable. The documentation I found (https://docs.splunk.com/Documentation/ES/latest/Admin/Customizenotables) basically says you can add additional fields, but this applies to all Notables in Incident Review. My question is: if other notables with different correlation searches don't include an additional field, what happens? Does the field simply not get displayed for that Notable, or is it listed with a null value in the Incident Review dashboard?
Hi guys! I have a sourcetype "A" with some info about infrastructure; host IP is one of those fields. I have another sourcetype "B" (same index) that has a list of critical IPs. What I'm trying to do is use eval and if() to set a field value when the IP from sourcetype "A" is present in sourcetype "B". Hard-coding the IP value in the subsearch gives the correct result:

index=lab | eval XPTO=if([ search index=lab sourcetype=B IP="192.168.1.2"],1,0)

but I need to pass the IP dynamically from the main search, something like:

index=lab | eval XPTO=if([ search index=lab sourcetype=B IP=$IP$],1,0)

It's a simple question, like an Excel VLOOKUP function. Do you have a suggestion?
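One sketch that avoids a per-IP subsearch is a left join on the IP field (field names taken from the question; for a large sourcetype B, writing it to a lookup with outputlookup and using the lookup command would scale better than join):

```
index=lab sourcetype=A
| join type=left IP
    [ search index=lab sourcetype=B | fields IP | eval is_critical=1 ]
| eval XPTO=coalesce(is_critical, 0)
```

Events whose IP appears in sourcetype B pick up is_critical=1 from the subsearch; coalesce() turns the missing value into 0 for everything else.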
Hi, I want to write the props for the logs below. The logs come with no timestamp; the timestamp is in the file name. These are the logs:

Message Is: https POST failed: . Status Is: Ok
Message Is: https POST successful: 200. Status Is: Ok
Changed .Pac File to http://liteway.prog2.com/proxyins/proxy_client.oac
Unable to change .Pac File to http://liteway.prog2.com/proxyins/proxy_client.oac

The file name looks like zscalerhttp_2023-01-09-18-03-25. Can anyone help me write the props for these logs?
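A minimal props.conf sketch, assuming a sourcetype name of zscaler_http (an assumption, not from the question). Since the events themselves carry no timestamp, DATETIME_CONFIG = CURRENT stamps each event with the time it is indexed; actually parsing the timestamp out of the file name would require a custom datetime.xml referenced from DATETIME_CONFIG:

```
[zscaler_http]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
DATETIME_CONFIG = CURRENT
```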
I have two lookups. One lookup has hostnames and IPs, and the other has only hostnames. I would like to run a search that takes the second lookup, checks whether each of its hostnames is in the first, and if it is, shows the hostname and the IP in the search results.
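A sketch of that pattern, assuming hypothetical file and field names (hosts_with_ips.csv with fields Hostname and IP, and hostnames.csv with field Hostname):

```
| inputlookup hostnames.csv
| lookup hosts_with_ips.csv Hostname OUTPUT IP
| where isnotnull(IP)
| table Hostname IP
```

Rows from the second lookup that match a hostname in the first pick up an IP; the where clause drops the non-matches.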
I have a lookup of hosts with a field Last_Scan_Datetime whose values were formatted using strftime(_time, "%Y-%m-%d-%H.%M.%S"). How would I go about searching for hosts that were scanned in the last 3 days? Everything I've found regarding searching with time has involved searching the index.
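Lookup fields are just strings, so one sketch is to parse the value back to epoch time with strptime() and compare (the lookup name scanned_hosts.csv is a placeholder):

```
| inputlookup scanned_hosts.csv
| eval scan_epoch = strptime(Last_Scan_Datetime, "%Y-%m-%d-%H.%M.%S")
| where scan_epoch >= relative_time(now(), "-3d")
```

The same format string used to write the field is reused to read it back, so the round trip is lossless.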
Hello, I downloaded a 60-day free trial of Splunk Enterprise on Friday to use as part of a training class. Things worked well on Friday, but I cannot log in to the account this morning. Any help would be appreciated. Thank you, Cynthia
Hello, if I have a detector in Splunk Observability posting notifications to Slack, is it possible to configure it to send a group of similar alerts (e.g., 2-3 pods in the same Kubernetes namespace aren't ready) as a single message rather than one message per pod?
Hi, I am able to run a normal search via the REST API using the syntax below:

https://SearchHead_host:8089/servicesNS/admin/search/search/jobs/export -d search="search index%3D_internal | head 3" -d output_mode=csv/xml/json

But when I use a multisearch query, the search doesn't return any results. Does anyone know how to overcome this? Thanks, Arun
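One thing worth checking: multisearch is a generating command, so the search string passed to the export endpoint has to begin with a leading pipe rather than the implicit search keyword. A sketch of what the search parameter would contain (indexes are placeholders):

```
| multisearch
    [ search index=_internal | head 3 ]
    [ search index=main | head 3 ]
```

If the string is sent as search=multisearch ... without the leading |, Splunk treats multisearch as a search term and returns nothing.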
Hi, we have Dev and Prod Linux servers running Splunk agents. The Prod infrastructure (v8.2.2.1) contains:
- Heavy Forwarder
- 3 Indexers
- Search Head
- DS, LM, MC, SHCD Agent

The Dev infrastructure (v8.0.1) contains:
- Search Head
- Indexer
- Deployment Server

I want to know how to upgrade correctly: which servers we must update first, and how I can back up our apps so we don't lose anything in the process. Thank you very much.
When I try to use:

transforms.conf
[setnull]
REGEX = 192\.168\.1\.50, 172\.16\.1\.50
DEST_KEY = queue
FORMAT = nullQueue

props.conf
[cisco]
TRANSFORMS-null = setnull

I still get all the results in the events. But when I use only one IP, it works fine. Is there any way to exclude more than one IP?
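REGEX in transforms.conf is a regular expression, not a comma-separated list, so the comma-separated version never matches a real event. A sketch of the same stanza combining the two addresses with alternation:

```
[setnull]
REGEX = 192\.168\.1\.50|172\.16\.1\.50
DEST_KEY = queue
FORMAT = nullQueue
```

Events containing either address are routed to nullQueue; additional IPs can be appended with further | alternatives.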
Hi all, I'm looking to find the latest entry for each user; there should be no duplicate entries for any user.

Profile:
acc name: john
accexpire: 2023/04/25
company: apple
country: USA
status: active
statuschanged: 2023/03/12

How can we achieve this with an SPL query? I need to load the result into an output lookup table dynamically. Thanks.
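A sketch of one approach, keeping the most recent record per account and writing it out (the index/sourcetype and the lookup file name are placeholders; field names follow the profile shown above):

```
index=myindex sourcetype=profiles
| stats latest(accexpire) AS accexpire latest(company) AS company
        latest(country) AS country latest(status) AS status
        latest(statuschanged) AS statuschanged by acc_name
| outputlookup latest_user_profiles.csv
```

stats ... by acc_name guarantees one row per user, and outputlookup refreshes the lookup on every scheduled run.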
Hello Spelunkers, has anyone tried importing services using a recurring import job? For example, using a CSV as the definition for services - dependent services - entities - service template, then importing it on a schedule to make sure no unwanted changes affect any service design. I am facing an issue doing this: the "replace" update type is not working; it always does an "upsert", which means if I remove an entity from the CSV, it does not get removed from the entity rule. Is this a bug, or does Splunk only apply conflict resolution for "entity import" and not for the "Service import job"? Let me know your views.
After installing Splunk 9 we have a problem with decoding LDAP events. We tried several apps, but none of them gave us correct results. We wanted to use the app "Encode / Decode Data for Splunk" but can't find any instructions on how to use it. Does anyone have experience with base64 decoding in Splunk 9?
Hi, we have a data-model-based search that we filter using a lookup (with match_type WILDCARD) that matches different fields:

| tstats count, values(Processes.dest) as dest, dc(Processes.dest) as dest_dc, min(_time) as earliest, max(_time) as latest, values(Processes.user) as user, dc(Processes.user) as user_dc from datamodel=Endpoint.Processes by Processes.process_guid Processes.parent_process_guid Processes.parent_process Processes.parent_process_path Processes.process Processes.process_path Processes.process_hash Processes.user
| rex field=Processes.process_hash "MD5=(?<process_md5>[A-Z0-9]+)"
| `drop_dm_object_name(Processes)`
| lookup sysmon_rules parent_process parent_process_path process process_path process_md5 OUTPUT description score

This works well and saves us from having multiple searches in place, but it would be great if there were something like a match_type REGEX for lookups. We could then combine several entries in the lookup into a single line. For example, these 4 lines:

score,description,parent_process_path,parent_process,process_path,process,process_md5
80,Office: Execution MSHTA,C:\Program Files (x86)\Microsoft Office\root\Office*,*,*\mshta.exe,*,*
80,Office: Execution PWSH,C:\Program Files (x86)\Microsoft Office\root\Office*,*,*\powershell.exe,*,*
80,Office: Execution WSCRIPT,C:\Program Files (x86)\Microsoft Office\root\Office*,*,*\wscript.exe,*,*
80,Office: Execution CMD,C:\Program Files (x86)\Microsoft Office\root\Office*,*,*\cmd.exe,*,*

could be combined into:

score,description,parent_process_path,parent_process,process_path,process,process_md5
80,Office: Execution susp child,(?i)C:\Program Files (x86)\Microsoft Office\root\Office.*,.*,(cmd.exe|wscript.exe|powershell.exe|mshta.exe),.*,.*

We want to keep the possibility of matching against multiple fields. Is there a trick (using inputlookup, map, ...) to optimize this? We're at a point where the lookup is getting cluttered because of small variations of processes in the Endpoint data model we would like to alert on. Hints, tips & help are appreciated. Chris
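Lookups have no REGEX match_type, but one workaround sketch is a cross join against a regex-rule lookup followed by match(). The lookup name sysmon_regex_rules.csv and the *_regex field names are assumptions, and join subsearch limits apply, so this is only viable while the rule lookup stays small:

```
<base tstats search as above>
| eval joiner=1
| join joiner max=0
    [ | inputlookup sysmon_regex_rules.csv | eval joiner=1 ]
| where match(process_path, process_path_regex)
    AND match(parent_process_path, parent_process_path_regex)
| fields - joiner
```

join ... max=0 pairs every result row with every rule row, and match() then keeps only the combinations where the regexes hit, which preserves multi-field matching.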
I have a Splunk query that runs every 5 minutes and alerts if a certain keyword has not been logged in the index in the last 5 minutes. But whenever there are Splunk event delays, it triggers false alerts. Is there any way I can differentiate whether the keyword is really absent or it's just latency in the events? I tried using (_indextime - _time), but that only works once the event has actually been received.
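One option is to window the alert on index time rather than event time, so late-arriving events are still counted in the run that sees them; a sketch (index and keyword are placeholders):

```
index=myindex "my_keyword" _index_earliest=-5m _index_latest=now
| stats count
| where count = 0
```

_index_earliest/_index_latest filter on when events were indexed, which sidesteps the gap between _time and arrival; the trade-off is that a real outage is detected one window later than with event time.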
Hi, I have a combination of consumer limits, e.g., A=1000, b=500, c=500, d=200, rest=100. Basically I want a list of all consumers who reached 80% of their limit. I wrote a query for one consumer; how can I accommodate all the other consumers with their different limits? Should I write a separate query for each consumer?

sourcetype="OS" operation= "ABC" consumer_src="A" | search minutesago= 1 | stats count by consumer_src | where count >= 0.8 * 1000
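Rather than one query per consumer, the limits could live in a lookup and be joined in. A sketch assuming a hypothetical consumer_limits.csv with columns consumer_src and limit, where the 100 default covers the "rest" bucket:

```
sourcetype="OS" operation="ABC" earliest=-1m
| stats count by consumer_src
| lookup consumer_limits.csv consumer_src OUTPUT limit
| eval limit=coalesce(limit, 100)
| where count >= 0.8 * limit
```

One search then covers every consumer, and adding a consumer or changing a limit only means editing the lookup.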
I am building a query where I want to use a top-10 list of values from a lookup table, and then run a search against those entries (each entry in a different query). The basic search is something like this:

| tstats count as count where index="myindex" id="some id value" by _time CAUSE_VALUE span=5m
| timechart sum(count) as total_count span=5min

The query against the lookup table that provides the variable for the ID is something like this:

| inputlookup lookuptable.csv | sort 10 -dm | table oper, dm | transpose 10 | rename "row "* AS "value_in*" | eval top1=value_in1

tstats needs to be the first command in the query, however, so I can't set the variable before it. The perfect query would be something like this, but it is not possible (because of the above):

~ Set Variable top1 ~
| inputlookup lookuptable.csv | sort 10 -dm | table oper, dm | transpose 10 | rename "row "* AS "value_in*" | eval top1=value_in1
~ Use Variable ~
| tstats count as count where index="myindex" id=top1 by _time CAUSE_VALUE span=5m
| timechart sum(count) as total_count span=5min

I did read some similar questions where it was suggested to use | where id=<whatever>, but that doesn't work in my case because of tstats. Any suggestions?
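Since subsearches run before the outer search, the lookup can feed tstats directly inside its where clause; a sketch assuming the oper column holds the id value:

```
| tstats count AS count
    where index="myindex"
        [ | inputlookup lookuptable.csv | sort 10 -dm | return id=oper ]
    by _time CAUSE_VALUE span=5m
| timechart span=5m sum(count) AS total_count
```

return emits id="<top value>" into the outer search; | return 10 id=oper would instead emit an OR of all ten values if one combined query is acceptable.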
Hi, I have a policy.csv file with 2 columns:

user      tags
Andre     IT
Kleo      Management
Vlad      Finance

And I also have an index=events with events and a field "user". What I want is to count the number of events for each tag by user, for example:

User      Tags      Count
Andre     IT        55

I've tried this, but it counts the number of rows from the CSV, not from the index:

|inputlookup policy.csv | join type=left tags [|tstats count where index=events by user] |stats count by user tags
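The final stats count counts lookup rows, not events. One sketch that lets the events drive the count and enriches them with the tag instead:

```
| tstats count where index=events by user
| lookup policy.csv user OUTPUT tags
| stats sum(count) AS count by user tags
```

tstats produces an event count per user, lookup attaches each user's tag from policy.csv, and sum(count) carries the event count through rather than re-counting rows.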