Greetings! I need your help finding the retention configuration for our Splunk syslog receiver from the command line, and changing it so that logs are kept for only 1 day instead of 2. The receiver lives under the /opt directory, where it receives syslog logs from different network devices; the logs are currently stored for two days and then deleted. I want the receiver to keep accepting all logs from the devices, but to delete each day's logs at midnight, after they have been indexed into the indexers, so that the receiver's storage does not run out of space. Kindly guide me on how to do this. Thank you in advance!
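If the two-day retention is implemented outside Splunk (i.e. the syslog daemon simply writes files under /opt and a cleanup job removes them), one common pattern is a nightly cron job that deletes files older than one day. A minimal sketch, assuming a hypothetical /opt/syslog path; adjust it to wherever your receiver actually writes:

```
# crontab entry: at midnight, delete *.log files last modified more than 1 day ago
0 0 * * * /usr/bin/find /opt/syslog -type f -name '*.log' -mtime +0 -delete
```

If instead the retention is controlled by a Splunk index, the relevant setting is frozenTimePeriodInSecs in indexes.conf (86400 seconds for one day), which can be changed from the command line by editing the .conf file and restarting Splunk.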
 
Hi team, I have the below kind of data in Splunk. It contains 3 fields, ISRF, DSRF and DSFF, and they are all multi-value fields.

2021-10-13 19:26:46,813 ISRF="[fullName,managerFullName,title,userName,division,department,location]" DSRF="[fullName,managerFullName,title,userName,division,department,location]" DSFF="[managerFullName,division,department,location,jobCode,reasonForLeaving]"
2021-10-12 19:32:31,504 ISRF="[fullName,managerFullName,userName,division,department,location]" C_DSRF="[fullName,managerFullName,title,userName,division,department,location]" DSFF="[managerFullName,division,department,location,custom05,jobCode,riskOfLoss,impactOfLoss,reasonForLeaving]"
......
......

I expect a report in the format below:

fields            count of ISRF   count of DSRF   count of DSFF
fullName          2               2               0
managerFullName   2               2               2
title             1               2               0
......
reasonForLeaving  0               0               1

I am trying the query below, and I am blocked on how to continue to get the expected table format:

<baseQuery>
| eval includeSearchResultField=replace(replace(C_ISRF,"\[",""),"\]",""),
       defaultSearchResultField=replace(replace(C_DSRF,"\[",""),"\]",""),
       filterFields=replace(replace(C_DSFF,"\[",""),"\]","")
| makemv delim="," includeSearchResultField
| makemv delim="," defaultSearchResultField
| makemv delim="," filterFields
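One possible approach (an untested sketch; it assumes Splunk 8.0+ for mvmap, and that the three fields are consistently named ISRF/DSRF/DSFF, whereas the second sample event shows a C_ prefix, so coalesce may be needed): tag each value with its source column, expand once, and let chart fill in the zeros.

```
<baseQuery>
| eval ISRF=split(replace(ISRF,"[\[\]]",""),","),
       DSRF=split(replace(DSRF,"[\[\]]",""),","),
       DSFF=split(replace(DSFF,"[\[\]]",""),",")
| eval tagged=mvappend(mvmap(ISRF,"ISRF|".ISRF),
                       mvmap(DSRF,"DSRF|".DSRF),
                       mvmap(DSFF,"DSFF|".DSFF))
| mvexpand tagged
| eval col=mvindex(split(tagged,"|"),0), field=mvindex(split(tagged,"|"),1)
| chart count over field by col
```

chart fills missing combinations with 0, which matches the zeros in the expected table.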
Hello All, can someone help me build a search query for the below use case?

My use case is to detect whether any S3 buckets have been set for public access via the PutBucketPolicy event. I need this query to show results only if the fields Effect and Principal have the values "Allow" and "*" (or {AWS:*}) respectively, for the same Sid. Basically, the following 2 conditions must both be met for a particular Sid:

Effect: Allow
Principal: * OR {AWS:*}

The raw event data, however, has 2 Sids (MustBeEncryptedInTransit and Cloudfront Access), as shown below, and each one has conflicting values of Effect and Principal.

eventName": "PutBucketPolicy"
"awsRegion": "us-east-1"
"sourceIPAddress": "x.x.x.x"
"userAgent": "[<some agent>]"
"requestParameters": {"bucketPolicy": {"Version": "2012-10-17"
"Statement": [{"Sid": "MustBeEncryptedInTransit" "Effect": "Deny" "Action": "s3:*" "Resource": ["arn:aws:s3:::<Bucket_Name>/*" "arn:aws:s3:::<Bucket_Name>"] "Principal": "*" "Condition": {"Bool": {"aws:SecureTransport": ["false"]}}}
{"Sid": "Cloudfront Access" "Effect": "Allow" "Action": "s3:*" "Resource": "arn:aws:s3::<Bucket_Name>/*" "Principal": {"AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity XXXXXX"}}]}
"bucketName": "<Bucket_Name>"
"Host": "<SomeHost_Name>"
"policy": ""}

Now, if I try the search below, it generates false positives, because the raw data has everything in the same event: Effect = Allow, Effect = Deny, Principal = *, and 2 values of Sid.

sourcetype=aws:cloudtrail eventName IN(PutBucketPolicy) userName="abcd" requestParameters.bucketPolicy.Statement{}.Effect = "Allow" requestParameters.bucketPolicy.Statement{}.Principal = "*" requestParameters.bucketPolicy.Statement{}.Sid = "Cloudfront Access"

I am just lost as to how to build an eval statement that checks Sid = "Cloudfront Access" (or Sid != "MustBeEncryptedInTransit") and only then checks the values of Effect and Principal. Hope I am clear.
If you all have better suggestions for checking for public access using PutBucketPolicy or ACLs, let me know.
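A possible way around the false positives (a sketch, not tested against your data) is to break the Statement array apart with spath and mvexpand, so that each Sid's Effect and Principal are evaluated on their own row rather than across the whole event:

```
sourcetype=aws:cloudtrail eventName=PutBucketPolicy userName="abcd"
| spath output=stmt path=requestParameters.bucketPolicy.Statement{}
| mvexpand stmt
| spath input=stmt output=sid path=Sid
| spath input=stmt output=effect path=Effect
| spath input=stmt output=principal path=Principal
| where effect="Allow" AND (principal="*" OR match(principal, "\"AWS\"\s*:\s*\"\*\""))
| table _time sid effect principal
```

After mvexpand, each statement is its own row, so an Allow in one Sid can no longer be combined with a Principal="*" from a different Sid.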
Hi, I want to know when the indexing process is done for zip files, through the web UI. I have a couple of huge zip files that are copied into /opt every day, and Splunk continuously indexes this path. Now I want to know in the Splunk web UI (not the CLI) when exactly the indexing process has finished for this path, shown as a percentage or progress indicator. Any ideas? Thanks.
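There is no built-in per-file progress bar, but the per-source throughput metrics in _internal can be charted from the web UI to show when a given file stops producing indexed data. A rough sketch, with a hypothetical file name that you would replace with your own:

```
index=_internal source=*metrics.log* group=per_source_thruput series="*yourfile.zip*"
| timechart span=1m sum(kb) as kb_indexed
```

When kb_indexed drops to zero and stays there, indexing of that source has likely finished; this is an approximation, not an exact percentage.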
Hello Splunk Community, can anyone help me build a query based on the below? I have a batch job that has multiple steps logged as separate events. How can I calculate the total duration of the batch job (Step 1 start to Step 5 end)? Example of my output format (dummy data used):

Step  Start_Time           End_Time             Duration (Hours)
1     2021-09-11 22:45:00  2021-09-11 22:45:01  00:00:01
2     2021-09-11 22:45:01  2021-09-11 22:45:20  00:00:19
3     2021-09-11 22:45:20  2021-09-11 22:58:15  00:12:55
4     2021-09-11 22:58:15  2021-09-11 22:58:39  00:00:24
5     2021-09-11 22:58:39  2021-09-11 24:20:31  01:21:52

THANK YOU!
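One possible shape for this (untested; it assumes each step event carries the Start_Time and End_Time strings shown above): parse both timestamps to epoch, take the earliest start and latest end, and format the difference as a duration.

```
<your search>
| eval start=strptime(Start_Time,"%Y-%m-%d %H:%M:%S"),
       end=strptime(End_Time,"%Y-%m-%d %H:%M:%S")
| stats min(start) as job_start max(end) as job_end
| eval total_duration=tostring(job_end - job_start, "duration")
| fieldformat job_start=strftime(job_start,"%Y-%m-%d %H:%M:%S")
| fieldformat job_end=strftime(job_end,"%Y-%m-%d %H:%M:%S")
```

If several batch jobs can overlap, add a job identifier field to the stats by clause.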
Hello Splunk Community, can anyone help me build a query based on the below? I want to convert a field (Fri Oct 8 23:15:05 AEDT 2021) into a time format and then calculate the duration by subtracting the start time from the end time. Appreciate your help.
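A sketch for that format (the field names start_time and end_time are assumptions; note that strptime's handling of timezone abbreviations like AEDT via %Z can be unreliable, so verify the resulting epoch values):

```
<your search>
| eval start_epoch=strptime(start_time, "%a %b %d %H:%M:%S %Z %Y"),
       end_epoch=strptime(end_time, "%a %b %d %H:%M:%S %Z %Y")
| eval duration=tostring(end_epoch - start_epoch, "duration")
```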
Hi, I have another request, similar to my previous post but with a variation. Here is the multi-value field ColY; ColY has only two values, ON or OFF. I need to find all rows whose values changed from ON to OFF or vice versa, in any order. Below is an example:

ColX     ColY
A123456  ON ON ON
A123457  ON OFF ON OFF
A123458  ON ON OFF ON ON ON OFF
A123459  OFF OFF OFF
A123460  ON ON ON OFF OFF OFF

Required output:

ColX     ColY                     totalChanges
A123457  ON OFF ON OFF            3
A123458  ON ON OFF ON ON ON OFF   3
A123460  ON ON ON OFF OFF OFF     1
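One possible trick (untested): join the multi-value field into a string, collapse consecutive runs of the same value with a back-referencing regex, and count what is left. For "ON ON ON OFF OFF OFF" this collapses to "ON,OFF", giving 2 - 1 = 1 change.

```
<your search>
| eval s=mvjoin(ColY,",")
| eval collapsed=replace(s, "(ON|OFF)(?:,\1)+", "\1")
| eval totalChanges=mvcount(split(collapsed,",")) - 1
| where totalChanges > 0
| table ColX ColY totalChanges
```

This assumes SPL's replace() supports PCRE backreferences in the pattern (it uses PCRE under the hood), and that ColY contains only the literal strings ON and OFF.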
Hi All, can someone help me with the following? ColY represents a multi-value field. I want to search for all rows which have null, 0 and some other values in ColY. Based on the example below, the output rows should be those for A123456 and A123461.

ColX     ColY
A123456  null 0 56789 987654
A123457  4332
A123458  54322 0
A123459  null 0
A123460  2345667 7665443
A123461  null 788765 0
A123462  876543 null
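A possible sketch (it assumes "null" is the literal string shown in the table, not a missing value): require a "null" entry, a "0" entry, and at least one entry that is neither.

```
<your search>
| where isnotnull(mvfind(ColY,"^null$"))
    AND isnotnull(mvfind(ColY,"^0$"))
    AND mvcount(mvfilter(NOT match(ColY,"^(null|0)$"))) > 0
| table ColX ColY
```

mvfind returns null when no value matches, and mvcount of an empty mvfilter result is null, so rows like A123459 (null and 0 but nothing else) are excluded.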
I want to add a dropdown selection menu inside a dashboard panel, and then, based on the item selected in the dropdown, change the search query accordingly so the results are updated in the chart visualization. Example: Select Month: All, Jan, Feb, ... Dec. When the user selects All, the query displays results for the entire year (Jan to Dec); when Jan is selected, the chart shows only the January data, and similarly for the others.

I have other panels in the dashboard under different rows, and I need to insert the dropdown menu only for one specific panel. Also, my base search stays the same regardless of the dropdown selection; only the panel's search string changes with the selection. Any suggestion or guide would be a great help. It would be nice if it can be done with no, or minimal, use of JavaScript. TIA, thanks.
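In Simple XML this can be done without JavaScript, because an <input> element can be placed inside a <panel> so it only affects that panel, and the panel search can post-process a shared base search. A sketch with hypothetical token, field, and base-search names:

```xml
<row>
  <panel>
    <input type="dropdown" token="month_tok">
      <label>Select Month</label>
      <choice value="*">All</choice>
      <choice value="Jan">Jan</choice>
      <choice value="Feb">Feb</choice>
      <default>*</default>
    </input>
    <chart>
      <search base="base_search">
        <query>search month="$month_tok$" | timechart count</query>
      </search>
    </chart>
  </panel>
</row>
```

The value "*" for All lets the same post-process query match every month without changing the base search.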
We know the amount of data ingested daily from the Splunk internal logs and the License dashboard, but we are trying to find out whether there is a way to measure the amount of data purged daily under our data retention policy. Appreciate any help on this.
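Splunk does not report purged volume directly, but the BucketMover messages in _internal record when buckets roll to frozen (which, under the default policy, means deletion), and that can approximate the purge rate. A rough, unverified sketch:

```
index=_internal sourcetype=splunkd component=BucketMover "freeze"
| timechart span=1d count as buckets_frozen
```

To turn bucket counts into volume, | dbinspect index=<your_index> reports sizeOnDiskMB per bucket, which could be sampled ahead of time and correlated with the frozen bucket IDs.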
Hi All, as part of one of my SRE objectives I am trying to find out the following in Splunk: the high (max) count of ERRORs within a given time period (1 hr / 24 hr / 144 hr), compared to the monthly 99th percentile. I was starting off with baby steps, assuming that the count simply zooms in on anything containing 'ERROR':

index=myIndex ERROR source="/test.log"
| timechart count by status
| addtotals
| addtotals fieldname=ERROR
| eval ErrorRate=round(Errors/Total*100,2)
| fields _time 5* ErrorRate

But that doesn't even seem to work. Help would be really appreciated. Thanks in advance, team!
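For what it's worth, one sketch that avoids the Errors/Total field-name mismatch in the attempt above (assuming a status field, and untested against your data):

```
index=myIndex source="/test.log"
| timechart span=1h count as total count(eval(status="ERROR")) as errors
| eval ErrorRate=round(errors/total*100,2)
| eventstats perc99(ErrorRate) as monthly_p99
| eval above_p99=if(ErrorRate > monthly_p99, 1, 0)
```

Run it over a 30-day window so the perc99 baseline is monthly; the max over 1 hr / 24 hr / 144 hr spans can then be compared against monthly_p99 by changing the span.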
I created an Authentication data model that has the default, Insecure, and Privileged Authentication datasets. It also uses action=success and action=failure. Please see the screenshot below. I can see the data coming in from different sources, but the issue is that we have so many Windows authentication failures. How can I fix this configuration issue? Has anybody come across such issues?
Hi, I created a Splunk Cloud free-trial account yesterday. When I logged in with the emailed credentials I was prompted to change the password, which I did. But today, when I try to log in to the cloud service using the same credentials I created yesterday, it shows me "access denied", with no obvious place to go about retrieving the login details. I need help retrieving my cloud login details, please.
This is really a log4net question, but I'm hoping the folks here can help; I have been unsuccessful at searching online for a solution.

We have a custom application which generates local logs in JSON format via the log4net module. We then have a Splunk UF installed to collect said logs. In general that all works fine. The problem is that some log messages include a nested JSON 'message' field, but log4net is misformatting it as a string, so Splunk doesn't parse the nested part. You can see the issue below, where log4net is unnecessarily adding quote marks around the nested part:

CURRENT/INVALID
"message":"{"command":"Transform271ToBenefitResponse","ms":1}"

PROPER
"message":{"command":"Transform271ToBenefitResponse","ms":1}

I'm not entirely sure of the log4net configuration, but here's what I was told by one of our developers:

ORIGINAL LOG4NET CONFIG
<conversionPattern value="%utcdate [%property{CorrelationId}] [%property{companyId}] [%property{userId}] [%thread] [%level] %logger - %message%newline" />

UPDATED CONFIG; STILL FAILS
<conversionPattern value="{&quot;date&quot;:&quot;%date{ISO8601}&quot;, &quot;correlationId&quot;:&quot;%property{CorrelationId}&quot;, &quot;companyId&quot;:&quot;%property{companyId}&quot;, &quot;userId&quot;:&quot;%property{userId}&quot;, &quot;thread&quot;:&quot;%thread&quot;, &quot;level&quot;:&quot;%level&quot;, &quot;logger&quot;:&quot;%logger&quot;, &quot;message&quot;:&quot;%message&quot;}%newline" />
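One hedged idea, valid only if every %message is guaranteed to be well-formed JSON: drop the &quot; wrapping around %message in the pattern, so JSON payloads land in the output unquoted. (If some messages are plain strings, this produces invalid JSON lines, in which case a dedicated JSON layout for log4net would be the safer route.)

```xml
<conversionPattern value="{&quot;date&quot;:&quot;%date{ISO8601}&quot;, &quot;correlationId&quot;:&quot;%property{CorrelationId}&quot;, &quot;companyId&quot;:&quot;%property{companyId}&quot;, &quot;userId&quot;:&quot;%property{userId}&quot;, &quot;thread&quot;:&quot;%thread&quot;, &quot;level&quot;:&quot;%level&quot;, &quot;logger&quot;:&quot;%logger&quot;, &quot;message&quot;:%message}%newline" />
```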
I have a field called serial_id with values like ABC2022100845001. I need a count of the values that contain "45" in the 5th and 6th bytes from the end.
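For what it's worth, in the sample value ABC2022100845001 the "45" actually sits at the 5th and 4th characters from the end, so a sketch using substr with a negative offset would be (adjust the offset if you count the positions differently):

```
<your search>
| where substr(serial_id, -5, 2)="45"
| stats count
```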
I have 2 major questions:

1) I have 2 sourcetypes, A and B, with 2 important fields, Category and Environment. I want to compare all of the Category and Environment values from sourcetype A to sourcetype B, and then return the results that are common to both sourcetypes.

2) Same setup: I want to compare all of the Category and Environment values from sourcetype A to sourcetype B, and then return the results that do not appear in both sourcetypes.
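A possible sketch covering both questions, using stats over the combined sourcetypes (the index name and literal sourcetype names A and B are assumptions from the description):

```
index=<your_index> sourcetype IN (A, B)
| stats dc(sourcetype) as st_count values(sourcetype) as sourcetypes by Category Environment
| where st_count=2
```

st_count=2 answers question 1 (pairs present in both sourcetypes); changing the last line to | where st_count=1 answers question 2, with the sourcetypes column showing which side each pair came from.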
I am pulling data from multiple locations, and a new field, threshold, has been introduced. The issue is that threshold is a common field but has different values depending on whether it accompanies cpuPerc, memoryCons, etc. There are 4 of them.

| mstats min("mx.process.cpu.utilization") as cpuPerc min("mx.process.threads") as nbOfThreads min("mx.process.memory.usage") as memoryCons min("mx.process.file_descriptors") as nbOfOpenFiles min("mx.process.up.time") as upTime avg("mx.process.creation.time") as creationTime WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=10s BY pid cmd service.type host.name service.name replica.name component.name threshold

One option I have, if I want to display all the thresholds separately, is to write 3 joins, but this is heavy on CPU:

| mstats min("mx.process.cpu.utilization") as cpuPerc WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=10s BY pid cmd service.type host.name service.name replica.name component.name threshold
| rename service.name as service_name
| rename threshold as T_cpuPerc
| join _time service.name replica.name
    [| mstats min("mx.process.threads") as nbOfThreads WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=10s BY pid cmd service.type host.name service.name replica.name component.name threshold
     | rename service.name as service_name
     | rename threshold as T_nbOfThreads ]
| join _time service.name replica.name
    [| mstats min("mx.process.memory.usage") as memoryCons WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=10s BY pid cmd service.type host.name service.name replica.name component.name threshold
     | rename service.name as service_name
     | rename threshold as T_memoryCons ]
| join _time service.name replica.name
    [| mstats min("mx.process.file_descriptors") as nbOfOpenFiles WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=10s BY pid cmd service.type host.name service.name replica.name component.name threshold
     | rename service.name as service_name
     | rename threshold as T_nbOfOpenFiles ]

I am trying it this way instead, but I am not sure how to merge the rows by time at the end; any ideas?

| mstats min("mx.process.cpu.utilization") as cpuPerc min("mx.process.threads") as nbOfThreads min("mx.process.memory.usage") as memoryCons min("mx.process.file_descriptors") as nbOfOpenFiles min("mx.process.up.time") as upTime avg("mx.process.creation.time") as creationTime WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=10s BY pid cmd service.type host.name service.name replica.name component.name threshold
| rename service.name as service_name
| search service_name IN (*cache*)
| eval threshold_nbOfThreads=if(isnull(nbOfThreads),"",$threshold$)
| eval threshold_memoryCons=if(isnull(memoryCons),"",$threshold$)
| eval threshold_nbOfOpenFiles=if(isnull(nbOfOpenFiles),"",$threshold$)
| table _time threshold threshold_nbOfOpenFiles threshold_memoryCons threshold_nbOfThreads

The issue is that the data now ends up on different rows, where I need the values to be on the same row by _time: there are 3 lines for each timestamp. How can I get them to merge per timestamp?
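If, as the description suggests, each fragmented row has only one metric populated (because the BY threshold clause splits the series), one possible merge (untested) is to tag the threshold per metric with plain eval and then collapse the rows with stats by _time and the identifying fields:

```
| mstats min("mx.process.cpu.utilization") as cpuPerc min("mx.process.threads") as nbOfThreads
         min("mx.process.memory.usage") as memoryCons min("mx.process.file_descriptors") as nbOfOpenFiles
  WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=10s
  BY pid cmd service.type host.name service.name replica.name component.name threshold
| eval T_cpuPerc=if(isnotnull(cpuPerc), threshold, null()),
       T_nbOfThreads=if(isnotnull(nbOfThreads), threshold, null()),
       T_memoryCons=if(isnotnull(memoryCons), threshold, null()),
       T_nbOfOpenFiles=if(isnotnull(nbOfOpenFiles), threshold, null())
| stats values(T_*) as T_* max(cpuPerc) as cpuPerc max(nbOfThreads) as nbOfThreads
        max(memoryCons) as memoryCons max(nbOfOpenFiles) as nbOfOpenFiles
  by _time service.name replica.name
```

Because null values are ignored by stats, the per-metric thresholds and values from the 3 fragmented rows collapse onto a single row per timestamp, avoiding the joins entirely.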
I am new to Splunk Cloud and I would like to install the Enterprise Security app (first screenshot below) on my Splunk instance. After opening the app, it should look like the second screenshot, and finally I should be able to see the last screen. Can anyone please help me with this? If you have any doubts about my question, please let me know. Thanks in advance.
Hi All, I am onboarding data from a heavy forwarder using a Splunk TA. Is it possible to 1) index all logs into one index and route them to the group A indexers, and 2) index a subset of the logs into another index and route it to the group B indexers? Thanks.
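This kind of selective routing is typically done on the heavy forwarder with two tcpout groups plus a transform that rewrites _TCP_ROUTING for the subset. A sketch with hypothetical group names, servers, and match pattern:

```ini
# outputs.conf
[tcpout]
defaultGroup = groupA

[tcpout:groupA]
server = idxA1:9997,idxA2:9997

[tcpout:groupB]
server = idxB1:9997,idxB2:9997

# props.conf
[your_sourcetype]
TRANSFORMS-routing = route_subset

# transforms.conf
[route_subset]
REGEX = <pattern matching the subset>
DEST_KEY = _TCP_ROUTING
FORMAT = groupB
```

To also change the subset's index, add a second transform with DEST_KEY = _MetaData:Index and FORMAT = <other_index>. Note that a single event carries one index value, so sending the same event to two different indexes on two indexer groups would require cloning (e.g. CLONE_SOURCETYPE), which is more involved.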