All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Can someone help me build a search query for the use case below?

My use case is to detect whether any S3 buckets have been set for public access via a PutBucketPolicy event. So far, thanks to help from @ITWhisperer and @isoutamo on this community, I have got my search to check that the fields Effect and Principal have the values "Allow" and "*" (or {AWS:*}) respectively for the same SID. Basically, the following two conditions must be met for a particular SID:

Effect: Allow
Principal: * OR {AWS:*}

Next I want to filter further based on the field "Condition". How do I filter on whether "Condition" exists or not? Below is a snippet of the raw event data:

"eventName": "PutBucketPolicy",
"awsRegion": "us-east-1",
"sourceIPAddress": "N.N.N.N",
"userAgent": "[S3Console/0.4 aws-internal/3 aws-sdk-java/1.11.1002 Linux/5.4.129-72.229.amzn2int.x86_64]",
"requestParameters": {"bucketPolicy": {"Version": "2012-10-17",
    "Statement": [{"Sid": "Access-to-specific-VPCE-only",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": "arn:aws:s3:::abc-logs/*",
        "Condition": {"StringEquals": {"aws:sourceVpce": "XXX"}}}],
    "Id": "Policy14151152"},
"bucketName": "Bucket-name",
"Host": "host.xyz.com",
"policy": ""}

=============

"eventName": "PutBucketPolicy",
"awsRegion": "us-east-1",
"sourceIPAddress": "N.N.N.N",
"userAgent": "[S3Console/0.4 aws-internal/3 aws-sdk-java/1.11.1002 Linux/5.4.116-64.217.amzn2int.x86_64 OpenJDK_64-Bit_Server_VM/Oracle_Corporation cfg/retry-mode/legacy]",
"requestParameters": {"bucketPolicy": {"Version": "2012-10-17",
    "Statement": [{"Effect": "Allow",
        "Principal": "*",
        "Action": ["s3:List*", "s3:Get*"],
        "Resource": "arn:aws:s3::/*",
        "Condition": {"IpAddress": {"aws:SourceIp": ["N.N.N.N", "N.N.N.N"]}}}]},
"bucketName": "bucket-name",
"Host": "abc.xyz.com",
"policy": ""}

I have tried the three options below to check for the presence of the field Condition, but none of them are working: they still show events where the raw data contains a Condition. I want my search to exclude the events that contain a Condition.

Base search:

| spath requestParameters.bucketPolicy.Statement{} output=Statement
| mvexpand Statement
| spath input=Statement
| where Effect="Allow"
| where Principal="*" OR 'Principal.AWS'="*"

Option 1: | where isnull(Condition)
Option 2: | where Condition=""
Option 3: | search Condition=""
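A possible explanation and workaround, sketched here as an untested suggestion: after | spath input=Statement, a Condition block is only extracted as nested fields such as Condition.StringEquals.aws:sourceVpce, so a flat Condition field never exists and isnull(Condition) is true for every statement. Testing the raw Statement string for the key instead might behave as intended:

| spath requestParameters.bucketPolicy.Statement{} output=Statement
| mvexpand Statement
| spath input=Statement
| where Effect="Allow" AND (Principal="*" OR 'Principal.AWS'="*")
| where NOT match(Statement, "\"Condition\"")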
Hi, what is the rex for field1 = "this is message"? Here is the log:

00:09:59.990 app module: AB[0000]: Data[{"code":"OK","messageEn":"this is message","messageCa":null,"id":"0"}

Thanks
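A minimal rex sketch keyed on the messageEn JSON key visible in the sample (the target field name field1 is kept from the question):

| rex "\"messageEn\":\"(?<field1>[^\"]*)\""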
I'm trying to extract data from logs and display a count based on two fields. Below are sample log events:

14:48:23.668 INFO - Response(Uuid=1e850916-f99d-1e35a8d3c474, pojo=[Pojo(id=ID0047, flg=false), Pojo(id=ID0065, flg=false), Pojo(id=ID0105, flg=true), Pojo(id=ID0106, flg=true), Pojo(id=ID0066, flg=false), Pojo(id=ID0108, flg=false)])
14:48:23.676 INFO - Response(Uuid=c5ec43a2-8c07-c56f9f5bbd1f, pojo=[Pojo(id=ID0106, flg=false), Pojo(id=ID0107, flg=false), Pojo(id=ID0068, flg=true), Pojo(id=ID0105, flg=false), Pojo(id=ID0064, flg=true), Pojo(id=ID0108, flg=false), Pojo(id=ID0047, flg=false)])
14:48:23.690 INFO - Response(Uuid=eac5f53e-6407-eac356ca0458, pojo=[Pojo(id=ID0107, flg=false), Pojo(id=ID0047, flg=true), Pojo(id=ID0067, flg=false), Pojo(id=ID0106, flg=false), Pojo(id=ID0068, flg=false), Pojo(id=ID0108, flg=false)])

This is the current query:

<base query> | rex field=pojo max_match=0 "Pojo\((?<ID>.*?)\,(?<FLG>.*?)\)" | chart count by ID FLG

If I use only one field in the count by (ID or FLG), it gives the correct count, but when I use both, the counts are not correct. The expected output looks like this:

ID       FLG=false   FLG=true
ID0047   2           1
ID0107   2           0
ID0065   1           0
...

Kindly help or suggest.
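Because max_match=0 builds ID and FLG as two independent multivalue fields, chart can no longer tell which ID belongs to which FLG. One sketch that keeps each id/flg pair together before splitting it (this runs rex against _raw; add field=pojo back if that field is already extracted):

| rex max_match=0 "Pojo\((?<pair>[^\)]+)\)"
| mvexpand pair
| rex field=pair "id=(?<ID>\w+), flg=(?<FLG>\w+)"
| chart count over ID by FLG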
Is it possible to place the pagination buttons at the top of a dashboard panel rather than have them appear at the bottom?
Hi,

I am hoping to get some help creating a search, which will be turned into an alert. I am working with system logs from a monitoring device: a log is submitted when any one of ~600 servers goes down, a new log is dropped every ~10 minutes while the server stays down, and a "Reconnect" log is submitted when the server comes back up. I want the search to return the name of any server/agent that has had at least one "disconnect" but no "reconnect" entry within a time period; once a reconnect is received, the server should no longer be listed. I am not very experienced with Splunk and so far only have a search that returns counts of both event types (connect/disconnect):

index="XXXlogs" sourcetype="systemlog" eventid="*connectserver" devicename="device1" logdescription="Agent*"
| stats count by win_server, event_id

Any help is appreciated.
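One common pattern for this is to keep only each server's most recent event and alert when it is a disconnect. A sketch, assuming eventid values ending in "disconnectserver"/"reconnectserver" and that devicename identifies the server (both are assumptions; adjust to the actual field and value names):

index="XXXlogs" sourcetype="systemlog" eventid="*connectserver" logdescription="Agent*"
| stats latest(eventid) as last_eventid by devicename
| where like(last_eventid, "%disconnectserver")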
Hi, I just installed the Configuration Explorer app in order to edit my transforms.conf. First I edited the settings file to set write-access = true, then I restarted Splunk. When I now try to edit transforms.conf, I can't save my changes; an error message appears saying "This file cannot be saved." Is there another way to edit the file, or how can I enable writing via the conf explorer?
Hi, what is the rex for these three fields? Here is the log:

2021-10-14 12:51:20,412 INFO [APP] log in : A12345@#4321@california
2021-10-14 12:51:20,412 INFO [APP] log in : D12345@torrento
2021-10-14 12:51:20,412 INFO [APP] log in : B12345@#1234@newyork

field1 = A12345, D12345, B12345
field2 = 4321, 1234
field3 = california, torrento, newyork

Thanks
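A sketch that treats the middle #-number segment as optional, so the second sample line (which has no field2) still matches:

| rex "log in : (?<field1>[^@]+)(?:@#(?<field2>\d+))?@(?<field3>\w+)$"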
Hello together, we moved our data to a new index cluster and since then we are unable to delete events with the "| delete" query. We have a test system, a single-server instance, which executes the same query; the datasets are identical on both systems. Here's a sample command we are trying to run on our clustered server:

index=name1 sourcetype=type1 earliest_time=-3d | delete

Since the documentation also noted that you should sometimes eval the index name to delete events, we tried that as well:

index=name1 sourcetype=type1 earliest_time=-3d | eval index=name1 | delete

Both queries without the delete command return only a small set of 8 events. If we pipe the result to "delete", there is no error message or warning; however, the returned result table shows that zero events have been deleted.

Currently we have a new search head cluster and also our old single search head connected to this index cluster. The old single search head was previously the instance we migrated our data from to the new index cluster. Despite that migration, nothing has changed in that server's user/role configuration. Still, delete no longer works on that search head either.

We followed all instructions in the Splunk documentation to ensure it is not a configuration problem: https://docs.splunk.com/Documentation/SplunkCloud/latest/SearchReference/Delete

Additionally, we did the following to troubleshoot the delete process:
- We tried other datasets/indexes on our cluster server: same result (working on the test server).
- We checked that our user has the "can_delete" role and created new local users with the "can_delete" role, both without success. We also noticed that if the user has no "can_delete" role assigned, the query result notifies that permissions are missing; since we don't get that message, we believe the role is set correctly.
- We compared the authorize.conf from our test and cluster systems and didn't see any differences for those roles.
- We checked all servers' splunkd logs after sending the delete command; no information or errors are available.
- We checked that on the file system the bucket folders/files have the correct access permissions (rwx) for the "splunk" user.
- We restarted the index cluster.
- We tried the search query directly on the cluster master, on each search head cluster member, and on the old single search head of our clustered system.
- We ran splunk healthcheck with no issues.
- We checked the bucket status for the index cluster.
- We checked the monitoring console for indexers with no issues.
- We ran | dbinspect for the index and checked that the listed file system paths are accessible by the splunk user.
- We ran the search queries in the terminal via the Splunk CLI, with no errors or additional messages shown.
- Both test and cluster servers are running the same version (8.1.6).
- The data from the query was indexed well after the migration.
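One detail that might be worth double-checking: the inline time modifier recognized by the search command is earliest, not earliest_time (the latter is a dispatch/REST parameter name), so earliest_time=-3d is treated as an ordinary field=value term rather than a time range. A sketch of the same delete with the standard modifier:

index=name1 sourcetype=type1 earliest=-3d
| delete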
I would like to find all unused serverclasses on a deployment server.
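A possible starting point, assuming this is run on the deployment server itself and that the goal is to compare the configured server classes against those actually in use: the endpoint below is the standard deployment-server REST listing of server classes, but the available fields should be verified on your version before building the comparison.

| rest /services/deployment/server/serverclasses splunk_server=local
| table title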
Hello, I am having some issues writing a field extraction expression for the following events (3 sample events are given below). Each event has 14 comma-separated field values. In most cases an event does not have all field values (i.e., there is nothing between two commas).

I was trying this expression:

^(?P<Field1>\w+),(?P<Field2>\w+),(?P<Field3>\w+),(?P<Field4>\w+),

But it gets stuck at Field4, which has no value (nothing between the two commas) in event 1. The same thing happens for the other events wherever there is no value between two commas. How would I write my field extraction expression (regex) to extract all 14 fields from each event when some fields may not have values? Any help will be highly appreciated. Thank you so much; I appreciate your support in this effort.

23SRFBB,HESR2,000000000,,TRY5gNbkVnedIIRbrk0A3wWOtE4L,12.218.76.129,2021-10-13 06:39:48 MDT,ISDMCISA,LOGOFF,USER,,,,
34SWFBB,RESG3,000000000,10AB,TFG3nNbkVnedIIDFbrk0A3wWOtE4L,,2021-10-13 06:39:48 MDT,ISDMCISA,LOGOFF,USER,,,,
45SRFBB,SES3X,000000000,,FDTt3nNbkVnedIIBSbrk0A3wWOtE4L,12.218.76.129,2021-10-13 06:39:48 MDT,ISDMCISA,LOGOFF,USER,,,1wqa,XY355
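The \w+ atoms require at least one word character, which is why the match dies at an empty field; [^,]* accepts empty values (and also values containing spaces or dots, like the timestamp and IP address in the samples). A sketch of the same pattern with that one change, shown for the first four fields and extendable to all 14:

^(?P<Field1>[^,]*),(?P<Field2>[^,]*),(?P<Field3>[^,]*),(?P<Field4>[^,]*),

For strict CSV data like this, a delimiter-based extraction (DELIMS = "," plus a FIELDS list in transforms.conf) is a standard alternative to a regex.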
Hi, how can I find events that contain non-English words? E.g., I have a log file where some lines contain German or Arabic words; how can I recognize these lines? Thanks
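One rough heuristic, sketched under the assumption that "non-English" in practice means "contains non-ASCII characters": this catches Arabic script and German umlauts, though German words spelled entirely with ASCII letters would slip through.

<your base search>
| where match(_raw, "[^\x00-\x7F]")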
Trying to implement an alert to detect spikes in logged events in our Splunk deployment and not sure how to go about it. For example: we have 15 hosts with varying numbers of sources within each. One of the sources in a host averages about 5-6k events per day over the past 30 days; then, out of the blue, we're hit with 1.3 million events in one day. I assume the alert would need to be tailored to each host (or source, not sure) and would need an average number of events over a "normal" week to compare against when there's a spike? Any help would be greatly appreciated.
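One common pattern is a per-host/per-source baseline with a standard-deviation threshold. A sketch (the index filter and the 3-sigma multiplier are placeholders to tune):

| tstats count where index=* earliest=-30d@d by host source _time span=1d
| eventstats avg(count) as avg_count stdev(count) as stdev_count by host source
| where _time >= relative_time(now(), "-1d@d") AND count > avg_count + 3 * stdev_count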
Hello Splunkers!!

When I upgrade my web HF from 8.0.0 to 8.1.2, I get the error below. Please let me know what workaround is available for this issue.

The TCP output processor has paused the data flow. Forwarding to host_dest=<indexer_name> inside output group splunkcloud from host_src=<HF_NAME> has been blocked for blocked_seconds=1970. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.
Hello, my team is interested in using Grand Central, but we'd like to compare the templates generated by Grand Central with Trumpet's to see whether there are any significant differences before making the switch. We haven't been able to get the app to work when importing it to our search head, so I'm hoping you can provide something generic. We are trying to capture the following sources if we use Grand Central: CloudTrail, Config Notifications, GuardDuty, and VPC Flow Logs. If you need any more information, please let me know. Best, Faleon
I use Ansible to install and configure Splunk Universal Forwarder on multiple servers. However, it's difficult to maintain a link to the product: the links are not consistent and contain unpredictable data, such as the git commit for the version. Is there a link that would allow me to generate a URL dynamically knowing only non-changing or predictable properties of the download, such as version, platform, and package type? For example:

"https://download.splunk.com/products/universalforwarder/releases/{{ splunk_version }}/linux/splunkforwarder-{{ splunk_version }}-{{ splunk_os }}-{{ splunk_arch }}.{{ splunk_pkg }}"

or, to get the latest version, something simple like:

https://download.splunk.com/products/universalforwarder/releases/latest
I encountered an interesting question from my client/security SME:
1. Which one is better: to have Splunk Security Essentials, or to retain Enterprise Security + Content Update?
2. Where are the detection rules kept in Splunk Security Essentials?

As far as I understand, the Splunk ES Content Update is quite easy to understand, and we can customise the savedsearches.conf (rules) to fit our environment. On the other hand, with Splunk Security Essentials we couldn't figure out where the rules exist or how to modify them. Any ideas how to get the detection rules of Splunk Security Essentials? Also, what would be the future direction of these developments? We wanted to stick to one of them if possible.
Hi, I have two fields, "servername" and "code". I need to calculate the percentage of each code by server.

index="my-index" | table servername code

Expected output:

servername   code   percent   count
server1      404    50%       50
             500    40%       40
             401    10%       10
server2      404    55%       55
             500    30%       30
             401    15%       15

Any idea? Thanks
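A sketch using eventstats to compute each server's total so the per-code share can be derived (field names are taken from the question):

index="my-index"
| stats count by servername code
| eventstats sum(count) as total by servername
| eval percent=round(100*count/total)."%"
| table servername code percent count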
The following does not give the IP for Splunk Enterprise Security (ES). Is there a better SPL to provide the list of all Splunk instance names and IPs, especially the ES? Thanks a million in advance.

| rest /services/server/sysinfo splunk_server=local
| table splunk_server

| rest /services/server/sysinfo splunk_server=local
| table splunk_server
| lookup dnslookup clienthost as splunk_server OUTPUT clienthost as ipAddress
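A sketch of one adjustment that might help: query /services/server/info across all connected peers (dropping splunk_server=local), and take dnslookup's clientip output, since the search above returns clienthost, which is a hostname rather than an IP. The host_fqdn field name is an assumption to verify against your version's REST output:

| rest /services/server/info
| table splunk_server host_fqdn
| lookup dnslookup clienthost AS host_fqdn OUTPUT clientip AS ipAddress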
Hello all, I'm using Splunk Cloud Platform and I want to know how to access this URL:

/splunkd/__raw/services/data/lookup_edit/lookup_contents

I saw it at https://lukemurphey.net/projects/splunk-lookup-editor/wiki/REST_endpoints but I don't really understand how it works. If someone can help me! @LukeMurphey @Anonymous Thank you all
Hello, we received data from Alicloud and found that a lot of duplicate fields populate Interesting Fields, like source and source_. Is there any query I can use to check how many fields and events are duplicates?
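A first-pass sketch using fieldsummary, which lists every extracted field with its event count and distinct-value count, making near-duplicate pairs such as source and source_ easy to spot (the index filter is a placeholder):

index=<your_alicloud_index>
| fieldsummary
| table field count distinct_count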