All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I'm running a query to return the text part of the log, but when I use it in my dashboard I get this error message: Value node <query> is not supposed to have children

My query:

index=... user Passed-Authentication earliest=@d | rex field=_raw "mdm-tlv=ac-user-agent=(?<message>.*?)," | table message

My dashboard:

<panel> <single> <title>Meu titulo</title> <search> <query>index=... user Passed-Authentication earliest=@d | rex field=_raw "mdm-tlv=ac-user-agent=(?<message/>.*?)," | table message </query> </search> <option name="height">96</option> </single> </panel>

I believe the error is due to <message>, but I'm new to Splunk.
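The error comes from the Simple XML parser, not from the SPL: inside <query>, the named capture group <message> is read as an XML child element, which a value node may not have. Escaping the angle brackets as &lt;message&gt; (or wrapping the whole search in a CDATA section) avoids this. A small Python sketch of the escaping, using the search string from the post:

```python
from xml.sax.saxutils import escape

# The search as it should read in SPL, with the <message> capture group.
search = ('index=... user Passed-Authentication earliest=@d '
          '| rex field=_raw "mdm-tlv=ac-user-agent=(?<message>.*?)," '
          '| table message')

# Escaped for embedding inside <query>...</query> in Simple XML:
# < becomes &lt; and > becomes &gt;, so the XML stays a plain value node.
print(escape(search))
```

Pasting the escaped string into the dashboard's <query> element keeps the rex intact while satisfying the XML parser.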
How to integrate / connect macro to data model or CIM data model 
Hello all, does someone know the definition of the rest.simpleRequest function? I'm trying to find out how it works when it's used like this:

rest.simpleRequest(url, postargs=postargs, sessionKey=key, raiseAllErrors=True)

Thank y'all
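For what it's worth, rest.simpleRequest comes from Splunk's bundled Python library (splunk.rest); broadly, it sends an HTTP request to the given URL, passes postargs as the POST body, authenticates with sessionKey, and with raiseAllErrors=True raises on HTTP errors instead of just returning them. The real implementation isn't reproduced here, so the following is only a hypothetical sketch of that behavior using urllib (the function name and internals are assumptions, not Splunk's code):

```python
import urllib.parse
import urllib.request

def simple_request_sketch(url, postargs=None, sessionKey=None, raiseAllErrors=False):
    """Hypothetical sketch of what a helper like rest.simpleRequest does:
    build an authenticated request (POST when postargs is given)."""
    data = urllib.parse.urlencode(postargs).encode() if postargs else None
    req = urllib.request.Request(url, data=data)  # data present -> POST
    if sessionKey:
        # Splunk REST endpoints accept "Authorization: Splunk <sessionKey>".
        req.add_header("Authorization", "Splunk %s" % sessionKey)
    # A real helper would now send the request, return (response, content),
    # and, with raiseAllErrors=True, raise on any non-2xx status.
    return req

req = simple_request_sketch("https://localhost:8089/services/server/info",
                            postargs={"output_mode": "json"}, sessionKey="key")
print(req.get_method(), req.get_header("Authorization"))
```

The actual function returns a (response, content) pair; checking the source under $SPLUNK_HOME/lib/python*/site-packages/splunk/rest is the authoritative answer.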
I am collecting firewall logs using the OPSEC LEA app. This add-on is set up on a heavy forwarder. The app is set up correctly and logs are arriving on the HF, but I am unable to view them on the search head. The HF has connectivity to the indexers and is sending internal logs and other application logs to them. How do I check what is wrong with the Checkpoint logs?
In the latest Splunk Security Essentials 3.4.0, and in previous releases, the Data Inventory detection in CIM + Event Size Introspection starts a query that will never complete due to an unmatched parenthesis.

The query is autogenerated, so I'm not sure if this is due to a misconfiguration on my part, or perhaps just an unwanted feature.

(index=main source=WinEventLog:Security) ) OR (index=main source=WinEventLog:Security ) | head 10000 | eval SSELENGTH = len(_raw) | eventstats range(_time) as SSETIMERANGE | fields SSELENGTH SSETIMERANGE tag | fieldsummary
Please confirm something or correct me. If I understand correctly, it's the event's _time that's the basis for bucket ageing (hot -> warm -> cold (-> frozen)), right? I understand that it's typically designed this way for collecting events whose timestamps grow monotonically. But what would happen if my source (regardless of the reason) generated events with "random" timestamps? One could be from the distant past (several years, maybe?), another from the future, and so on. Would that mean I could end up rolling the buckets after just one or two events, because I'd have sufficiently old events, or a sufficiently big timespan in the case of hot buckets?
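That's essentially right: span and age decisions are keyed off event _time, so wildly scattered timestamps can make buckets roll very early and spread events across many buckets. A toy illustration of the hot-bucket span check (the 90-day figure mirrors the maxHotSpanSecs default in indexes.conf; the logic is illustrative only, not indexer code):

```python
MAX_HOT_SPAN_SECS = 90 * 24 * 3600  # mirrors the default maxHotSpanSecs

def span_forces_roll(event_times):
    """True if the event-time span alone would already force a hot bucket
    to roll (real indexers also roll on size, count, idle time, etc.)."""
    return max(event_times) - min(event_times) > MAX_HOT_SPAN_SECS

now = 1_634_000_000                          # some "current" epoch time
five_years_ago = now - 5 * 365 * 24 * 3600
print(span_forces_roll([now, five_years_ago]))  # two events suffice: True
print(span_forces_roll([now, now - 3600]))      # one busy hour: False
```

So yes: a single far-past or far-future event landing in a hot bucket can blow out its timespan and trigger a roll almost immediately.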
Why doesn't the threathunting index receive mapped data from Sysmon (the windows index)? By the way, I edited the macros to suit my environment, but it still didn't work.
Hi, I have two fields in my logfile, <servername> and <CLOSESESSION>, and I need to know, for each servername, when the count of CLOSESESSION is 0 for the day. Every day I expect CLOSESESSION to appear in my server logs; if one or more servers have no CLOSESESSION, it means something is going wrong. Here is the SPL:

index="my_index" | rex field=source "(?<servername>\w+)." | rex "CLOSESESSION\:\s+(?<CLOSESESSION>\w+)" | table _time servername CLOSESESSION

Expected output:

Servername    cause
Server10      NOCLOSESESSION
Server15      NOCLOSESESSION

Any idea? Thanks,
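In SPL this is usually solved by comparing the servers seen today against a reference list (a lookup of known servers, or a wider time window), since a search can't return events that don't exist. The core logic is just a set difference; a sketch with made-up server names:

```python
# Servers that should report, e.g. from a lookup of known servers.
expected = {"Server10", "Server15", "Server20"}

# Servers that actually logged a CLOSESESSION event today.
seen_today = {"Server20"}

# Anything expected but not seen today gets flagged.
for server in sorted(expected - seen_today):
    print(server, "NOCLOSESESSION")
```

The SPL equivalent would append the lookup of expected servers to the day's results and keep the names whose event count stays at zero.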
Hi fellows,

I'm trying to get some statistics about AD users whose AD account will expire in 7 days. I need help because my request doesn't work as expected: I get the list of all user accounts. Do I need ldapsearch instead of EventCode=4738 to get all users? The list displayed is only 1 month after, as if the relative_time function isn't working correctly.

index=* EventCode=4738 Account_Expires!="-" | eval is_interesting=strptime(Account_Expires,"%m/%d/%Y") | where is_interesting < relative_time(now(),"+7d@d") | table user status Account_Expires is_interesting
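One thing worth checking in the SPL itself: where is_interesting < relative_time(now(),"+7d@d") also matches accounts that expired long ago, since any past date is below the upper bound; bounding from below as well restricts the results to the coming 7 days. The same comparison in Python (dates are made up):

```python
from datetime import datetime, timedelta

def expires_within_7_days(account_expires, now=None):
    """Parse Account_Expires (%m/%d/%Y) and keep only dates inside the
    next-7-days window -- note the lower bound, which the SPL lacks."""
    now = now or datetime.now()
    expiry = datetime.strptime(account_expires, "%m/%d/%Y")
    return now <= expiry < now + timedelta(days=7)

now = datetime(2021, 10, 13)
print(expires_within_7_days("10/18/2021", now))  # inside the window: True
print(expires_within_7_days("11/20/2021", now))  # too far out: False
print(expires_within_7_days("01/05/2020", now))  # already expired: False
```

In SPL, the extra condition would be something like where is_interesting >= now() AND is_interesting < relative_time(now(),"+7d@d").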
<?xml version="1.0" standalone="yes" ?>
<SymCLI_ML>
  <Symmetrix>
    <Symm_Info>
      <symid>000197000225</symid>
    </Symm_Info>
    <Disk_Group>
      <Disk_Group_Info>
        <disk_group_number>1</disk_group_number>
        <disk_group_name>GRP_1_3840_EFD_7R5</disk_group_name>
        <disk_location>Internal</disk_location>
        <disks_selected>17</disks_selected>
        <technology>EFD</technology>
        <speed>0</speed>
        <form_factor>N/A</form_factor>
        <hyper_size_megabytes>56940</hyper_size_megabytes>
        <hyper_size_gigabytes>55.6</hyper_size_gigabytes>
        <hyper_size_terabytes>0.05</hyper_size_terabytes>
        <max_hypers_per_disk>64</max_hypers_per_disk>
        <disk_size_megabytes>3644152</disk_size_megabytes>
        <disk_size_gigabytes>3558.7</disk_size_gigabytes>
        <disk_size_terabytes>3.48</disk_size_terabytes>
        <rated_disk_size_gigabytes>3840</rated_disk_size_gigabytes>
        <rated_disk_size_terabytes>3.75</rated_disk_size_terabytes>
      </Disk_Group_Info>
      <Disk_Group_Totals>
        <units>gigabytes</units>
        <total>60498.6</total>
        <free>0.0</free>
        <actual>60498.8</actual>
      </Disk_Group_Totals>
    </Disk_Group>
    <Disk_Group_Summary_Totals>
      <units>gigabytes</units>
      <total>60498.6</total>
      <free>0.0</free>
      <actual>60498.8</actual>
    </Disk_Group_Summary_Totals>
  </Symmetrix>
</SymCLI_ML>

I have been trying to get this data sorted but have been unable to. I need your kind help.
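Depending on how this is ingested (KV_MODE=xml field extraction, or preprocessing before indexing), the structure flattens cleanly into per-group fields. A minimal Python/ElementTree sketch that pulls the array ID plus each group's name and totals from a trimmed copy of the sample above:

```python
import xml.etree.ElementTree as ET

xml_text = """<SymCLI_ML><Symmetrix>
  <Symm_Info><symid>000197000225</symid></Symm_Info>
  <Disk_Group>
    <Disk_Group_Info>
      <disk_group_name>GRP_1_3840_EFD_7R5</disk_group_name>
      <technology>EFD</technology>
    </Disk_Group_Info>
    <Disk_Group_Totals>
      <units>gigabytes</units><total>60498.6</total><free>0.0</free>
    </Disk_Group_Totals>
  </Disk_Group>
</Symmetrix></SymCLI_ML>"""

root = ET.fromstring(xml_text)
symid = root.findtext("Symmetrix/Symm_Info/symid")
for group in root.iter("Disk_Group"):
    name = group.findtext("Disk_Group_Info/disk_group_name")
    total = float(group.findtext("Disk_Group_Totals/total"))
    free = float(group.findtext("Disk_Group_Totals/free"))
    print(symid, name, total, free)
```

Once flattened like this (or extracted via spath in SPL), sorting and charting by group name or free space is straightforward.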
Hi all, the tabled results from a scheduled search are sent via email as an attached CSV. Some rows can be very long, so in some cases, when I open that CSV file with Excel, I find some "split rows": I would expect one single line per row, but instead I sometimes have half a line positioned in the second column (as in the screenshot below). I'd like to obtain only one entire line per row, so that every event sits only in the first column of the Excel sheet. The source search finds some events and tables some fields as the result. Thanks in advance for any hint.
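Split rows in Excel almost always mean a field value contains an embedded newline: the CSV itself is technically valid (newlines inside quoted fields are allowed), but the display breaks the event across rows. Replacing newlines in the offending field before tabling, e.g. with something like rex mode=sed "s/[\r\n]+/ /g" on that field, avoids it. The same cleanup in Python (sample rows are made up):

```python
import csv, io

rows = [
    ["host1", "first part of a long event\ncontinuation Excel shows as a new row"],
    ["host2", "a normal single-line event"],
]

# Flatten embedded newlines so each event stays on one spreadsheet row.
cleaned = [[cell.replace("\r", " ").replace("\n", " ") for cell in row]
           for row in rows]

buf = io.StringIO()
csv.writer(buf).writerows(cleaned)
print(buf.getvalue())
```

After the replacement, each logical event serializes as exactly one CSV record.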
I have the following results returned by a search query:

_time                    Id1        Id2
2021-10-13 08:20:22.219  ABC471_1   8456
2021-10-13 08:20:21.711  ABC471_8   8463
2021-10-13 08:20:16.112  ABC471_3   8458

However, I only receive an alert notification for the first result. My alert configuration is set up as follows:

Settings
  Alert type: Scheduled
  Time Range: Today
  Cron Expression: */5 * * * *
  Expires: 24 hours
Trigger Conditions
  Number of Results: > 0
  Trigger: For each result
  Throttle: Ticked
  Suppress results containing field value: Id2=$result.Id2$
  Suppress triggering for: 24 hours
Trigger Actions
  Add to Triggered Alerts
  Send email

I am expecting 3 emails to be generated, one for each of my search query results, given that I am suppressing on Id2, which is different in each case. However, I am just receiving the one alert as stated above. Can anyone advise me what I am doing wrong in this case? Thanks
Hi, I deployed a Splunk distributed topology. Now my search head server has an issue: the KV Store is in a failed state (which makes the Enterprise Security app fail too). I checked "/opt/splunk/var/log/splunk/splunkd.log" and found the logs below:

10-13-2021 18:14:03.127 +0700 ERROR DataModelObject - Failed to parse baseSearch. err=Error in 'inputlookup' command: External command based lookup 'correlationsearches_lookup' is not available because KV Store initialization has failed. Contact your system administrator., object=Correlation_Search_Lookups, baseSearch=| inputlookup append=T correlationsearches_lookup | eval source=_key | eval lookup="correlationsearches_lookup" | append [| `notable_owners`] | fillnull value="notable_owners_lookup" lookup | append [| `reviewstatuses`] | fillnull value="reviewstatuses_lookup" lookup | append [| `security_domains`] | fillnull value="security_domain_lookup" lookup | append [| `urgency`] | fillnull value="urgency_lookup" lookup

10-13-2021 18:14:30.350 +0700 ERROR KVStorageProvider - An error occurred during the last operation ('replSetGetStatus', domain: '15', code: '13053'): No suitable servers found (`serverSelectionTryOnce` set): [connection closed calling ismaster on '127.0.0.1:8191']

10-13-2021 18:14:30.350 +0700 ERROR KVStoreAdminHandler - An error occurred.

Could anyone help me troubleshoot this issue and solve it? Thanks so much!
Dear Splunk Community,

I have a statistics table and a corresponding column chart that show the number of errors per server. Right now I am changing the colors in the column chart based on a static value, like so:

index="myIndex" host="myHostOne*" OR host="myHostTwo*" source="mySource" ERROR NOT WARN CTJT* | table host, errors | eval errors = host | stats count by host | eval redCount = if(count>50,count,0) | eval yellowCount = if(count<=50 AND count>25,count,0) | eval greenCount = if(count<=25,count,0) | fields - count | dedup host | rex field=host mode=sed "s/\..*$//" | sort host asc | rename host AS "Servers" | rename count AS "Foutmeldingen"

The above uses a time range of "last 24 hours". I would like to change the colors of the bars (green, yellow, red) when a certain percentage of errors has been reached, based on the average of last week. To summarize, I would like to:

- Somehow get the average number of errors per server per day over the last 7 days
- Then specify a percentage for each color (e.g. if the number of errors today is 25% more than last week's average, make the bar red)

I have no idea how to do this; can anyone help? Thanks in advance.
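In SPL, the 7-day baseline is typically computed over an earliest=-7d@d latest=@d window (count per host per day, then an average per host), and today's count is compared against it with eval. The threshold arithmetic itself is simple; a sketch with made-up counts and illustrative percentages:

```python
def bar_color(today_count, last_week_daily_counts, red_pct=25.0, yellow_pct=0.0):
    """Color a server's bar by how far today's error count sits above
    last week's per-day average (percentage thresholds are illustrative)."""
    avg = sum(last_week_daily_counts) / len(last_week_daily_counts)
    pct_over = (today_count - avg) / avg * 100.0
    if pct_over > red_pct:
        return "red"
    if pct_over > yellow_pct:
        return "yellow"
    return "green"

week = [20, 25, 30, 20, 25, 30, 25]          # avg = 25 errors/day
print(bar_color(40, week))   # 60% above average -> red
print(bar_color(26, week))   # 4% above average -> yellow
print(bar_color(20, week))   # below average -> green
```

The same if-chain maps directly onto the redCount/yellowCount/greenCount eval pattern already in the search, with the static 50/25 cutoffs replaced by avg-derived ones.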
Greetings! I need your help finding the retention configuration for the Splunk syslog receiver via the command line, and changing it so that logs are kept for only 1 day instead of 2.

The Splunk receiver is in the /opt directory, where I receive syslog logs from different network devices; those logs are stored for some days and then deleted. I want to keep receiving all logs coming from the different devices, but when the day finishes at midnight, delete that day's logs from the receiver once they have been indexed into the indexers. I want to avoid the receiver storage running out of space. Thank you in advance!
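This kind of cleanup is usually not a Splunk setting at all: the receiver directory is just files on disk, so retention is handled outside Splunk by a cron job (e.g. find ... -mtime +0 -delete) or logrotate. A Python sketch of the same idea, assuming a receiver path and assuming that anything older than 24 hours has already been indexed (both are assumptions to verify in your environment):

```python
import os, time

def purge_old_logs(directory, max_age_secs=24 * 3600, now=None):
    """Delete regular files older than max_age_secs; return names removed.
    Assumes files that old have already been indexed by Splunk."""
    now = now if now is not None else time.time()
    removed = []
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        if os.path.isfile(path) and now - os.path.getmtime(path) > max_age_secs:
            os.remove(path)
            removed.append(name)
    return removed

# Example (the path is an assumption): run daily from cron at midnight.
# purge_old_logs("/opt/syslog")
```

Scheduling this (or the equivalent find one-liner) at midnight gives the 1-day retention without touching how Splunk receives or indexes the data.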
 
Hi team, I have the kind of data below in Splunk; it contains 3 fields, ISRF, DSRF and DSFF. They are all multi-value fields.

2021-10-13 19:26:46,813 ISRF="[fullName,managerFullName,title,userName,division,department,location]" DSRF="[fullName,managerFullName,title,userName,division,department,location]" DSFF="[managerFullName,division,department,location,jobCodereasonForLeaving]"
2021-10-12 19:32:31,504 ISRF="[fullName,managerFullName,userName,division,department,location]" C_DSRF="[fullName,managerFullName,title,userName,division,department,location]" DSFF="[managerFullName,division,department,location,custom05,jobCode,riskOfLoss,impactOfLoss,reasonForLeaving]"
......

I expect the report in the format below:

fields            count of ISRF   count of DSRF   count of DSFF
fullName          2               2               0
managerFullName   2               2               2
title             1               2               0
......
reasonForLeaving  0               0               1

I am trying the queries below, and I am stuck on how to continue to get the expected table format.

<baseQuery> |eval includeSearchResultField=replace(replace(C_ISRF,"\[",""),"\]",""), defaultSearchResultField=replace(replace(C_DSRF,"\[",""),"\]",""), filterFields=replace(replace(C_DSFF,"\[",""),"\]","") |makemv delim="," includeSearchResultField |makemv delim="," defaultSearchResultField |makemv delim="," filterFields
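After the makemv stage, the usual SPL route is to count each multi-value field by value (e.g. via stats count by value for each field) and join the three counts back together on the value. The reshaping being asked for looks like this in Python (events trimmed to a few values for brevity):

```python
from collections import Counter

events = [
    {"ISRF": "fullName,managerFullName,title", "DSRF": "fullName,title",
     "DSFF": "managerFullName"},
    {"ISRF": "fullName,managerFullName", "DSRF": "fullName,title",
     "DSFF": "managerFullName,reasonForLeaving"},
]

# One counter per multi-value field.
counts = {field: Counter() for field in ("ISRF", "DSRF", "DSFF")}
for event in events:
    for field, value in event.items():
        counts[field].update(value.split(","))

# One row per distinct value, one count column per field.
for value in sorted(set().union(*(c.keys() for c in counts.values()))):
    print(value, counts["ISRF"][value], counts["DSRF"][value], counts["DSFF"][value])
```

Each output row matches one row of the expected report: the value, then its occurrence count in each of the three fields (0 where it never appears).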
Hello all, can someone help me build a search query for the use case below?

My use case is to detect whether any S3 buckets have been set for public access via a PutBucketPolicy event. I need this query to show results only if the fields Effect and Principal have the values "Allow" and "*" (or {AWS:*}) respectively for the same Sid.

Basically, the following 2 conditions must both be met for a particular Sid:
Effect: Allow
Principal: * OR {AWS:*}

The raw event data, however, has 2 Sids (MustBeEncryptedInTransit and Cloudfront Access) as shown below, and each one has conflicting values of Effect and Principal.

eventName": "PutBucketPolicy"
"awsRegion": "us-east-1"
"sourceIPAddress": "x.x.x.x"
"userAgent": "[<some agent>]"
"requestParameters": {"bucketPolicy": {"Version": "2012-10-17" "Statement": [{"Sid": "MustBeEncryptedInTransit" "Effect": "Deny" "Action": "s3:*" "Resource": ["arn:aws:s3:::<Bucket_Name>/*" "arn:aws:s3:::<Bucket_Name>"] "Principal": "*" "Condition": {"Bool": {"aws:SecureTransport": ["false"]}}} {"Sid": "Cloudfront Access" "Effect": "Allow" "Action": "s3:*" "Resource": "arn:aws:s3::<Bucket_Name>/*" "Principal": {"AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity XXXXXX"}}]}
"bucketName": "<Bucket_Name>"
"Host": "<SomeHost_Name>"
"policy": ""}

Now, if I try the search below, it generates false positives, because the raw data has everything in the same event: Effect = Allow, Effect = Deny, Principal = *, and 2 values of Sid.

sourcetype=aws:cloudtrail eventName IN(PutBucketPolicy) userName="abcd" requestParameters.bucketPolicy.Statement{}.Effect = "Allow" requestParameters.bucketPolicy.Statement{}.Principal = "*" requestParameters.bucketPolicy.Statement{}.Sid = "Cloudfront Access"

I am just lost as to how to build an eval statement that checks if Sid = "Cloudfront Access" (or Sid != "MustBeEncryptedInTransit") and only then checks the values of Effect and Principal. I hope I am clear.
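The false positives happen because Statement{}.Effect, Statement{}.Principal and Statement{}.Sid are flattened into parallel multi-value fields, so the pairing between them is lost; in SPL the usual fix is to spath the Statement array, mvexpand it so each statement becomes its own row, and only then filter. The per-statement check itself looks like this (the OpenToWorld statement is made up to show a hit):

```python
def risky_sids(bucket_policy):
    """Return the Sids whose own statement has Effect=Allow AND a fully
    public Principal ('*' or {'AWS': '*'}) -- evaluated per statement,
    never across statements of the same event."""
    hits = []
    for stmt in bucket_policy.get("Statement", []):
        principal = stmt.get("Principal")
        is_public = principal == "*" or principal == {"AWS": "*"}
        if stmt.get("Effect") == "Allow" and is_public:
            hits.append(stmt.get("Sid"))
    return hits

policy = {"Version": "2012-10-17", "Statement": [
    {"Sid": "MustBeEncryptedInTransit", "Effect": "Deny", "Principal": "*"},
    {"Sid": "Cloudfront Access", "Effect": "Allow",
     "Principal": {"AWS": "arn:aws:iam::cloudfront:user/SomeOAI"}},
    {"Sid": "OpenToWorld", "Effect": "Allow", "Principal": "*"},
]}
print(risky_sids(policy))
```

Run against the sample event from the post (first two statements only), this returns nothing: Deny+* and Allow+specific-ARN are each safe, which is exactly the per-statement pairing the flat multi-value search cannot express.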
If you all have better suggestions for checking for public access using PutBucketPolicy or ACLs, let me know.
Hi, I want to know when the indexing process is done for zip files, through the web UI. I have a couple of huge zip files that are copied into /opt every day, and I continuously index this path in Splunk. Now I want to know when exactly the indexing process for this path is done, shown in the Splunk web UI (not the CLI) as a percentage or progress indicator. Any idea? Thanks
Hello Splunk Community, can anyone help me build a query based on the below? I have a batch job that has multiple steps logged as separate events. How can I calculate the total duration of the batch job (step 1 start to step 5 end)? Example of my output format (dummy data used):

Step  Start_Time           End_Time             Duration (Hours)
1     2021-09-11 22:45:00  2021-09-11 22:45:01  00:00:01
2     2021-09-11 22:45:01  2021-09-11 22:45:20  00:00:19
3     2021-09-11 22:45:20  2021-09-11 22:58:15  00:12:55
4     2021-09-11 22:58:15  2021-09-11 22:58:39  00:00:24
5     2021-09-11 22:58:39  2021-09-11 24:20:31  01:21:52

THANK YOU!
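In SPL this is typically done with stats over the job's events, taking the earliest start and the latest end (or range(_time)) and subtracting the parsed timestamps with eval. The arithmetic on the sample rows, with step 5's end time "24:20:31" read as 00:20:31 on the next day:

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S"
steps = [
    ("2021-09-11 22:45:00", "2021-09-11 22:45:01"),  # step 1
    ("2021-09-11 22:45:01", "2021-09-11 22:45:20"),  # step 2
    ("2021-09-11 22:45:20", "2021-09-11 22:58:15"),  # step 3
    ("2021-09-11 22:58:15", "2021-09-11 22:58:39"),  # step 4
    ("2021-09-11 22:58:39", "2021-09-12 00:20:31"),  # step 5 (24:20:31 -> next day)
]

# Total job duration = earliest start of any step to latest end of any step.
job_start = min(datetime.strptime(s, FMT) for s, _ in steps)
job_end = max(datetime.strptime(e, FMT) for _, e in steps)
print(job_end - job_start)  # 1:35:31
```

The SPL analogue is a stats min(strptime(Start_Time, fmt)) and max(strptime(End_Time, fmt)) per job, followed by eval duration = end - start and a tostring(duration, "duration") for display.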